Author: researchudit@gmail.com

  • Capital, Too Early


    Notes From Someone Who Touched Money Too Early

    Most founders encounter capital late.

    They graduate.
    They build something.
    Then eventually they raise money.

    My experience with capital started much earlier, and in a much stranger way.

    I was in class 10.

    No startup.
    No product.
    Just an Instagram page, curiosity, and a habit of talking to people on the internet.

    Networking, even before I knew the word for it, was the thing that opened most doors in my life.

    One day a friend from Instagram asked me to design a logo for his page called “Billionaire’s Academy.”

    That was my first paid project.

    Five dollars.

    I had never used Canva professionally before. I made a simple design, sent it to him, he liked the second version, and then he sent the money.

    There was only one problem.

    I was a minor.

    I didn’t even have a bank account.

    The money came into my father’s account.

    A few days later he gave me another project: build a website. I charged around $70–80. I never fully delivered that project, but he still paid me.

    I remember spending that money on clothes for my parents and my sisters.

    For someone who had never earned before, that feeling was unreal.

    That was my first encounter with capital.

    Not venture capital.

    Just the simple realization that ideas and effort could turn into money.


    The Pattern I Later Noticed

    Looking back, my life with money has followed a strange pattern.

    I make some money.

    Then everything goes to zero.

    Then I make more money than I had ever seen before.

    Then it collapses again.

    This cycle has repeated many times.

    When I was still in school, I made a few hundred rupees doing small work online. Then for six to eight months, nothing. Completely broke.

    Later I started a small agency. Made a few thousand rupees.

    Then again things slowed down.

    Then a family friend funded one of my experiments. I spent most of that money trying different ideas. When the money ran out, I had to shut everything down and return from Delhi back to Patna.

    Then I started again.

    I worked with someone briefly and made around ₹20–30k.

    Then I built a solo project that suddenly made tens of lakhs in revenue.

    I thought I had figured everything out.

    Then that collapsed too.

    Then another opportunity came.

    Then again experiments.
    Then again near bankruptcy.

    Then again another chance.

    Grants. Incubation. New projects.

    Over time I realized something about myself.

    I am not a conservative operator.

    I make money, and then I spend most of it on experiments.

    Sometimes 80% of what I earn goes back into research, ideas, and risky bets.

    It’s irrational.

    But it’s also how I learn.


    My Relationship With Money

    I grew up in a lower-middle-class family.

    Money mattered.

    But something interesting happened when I started earning.

    My parents never treated my money as theirs.

    They never controlled it.

    They allowed me to make mistakes with it.

    That freedom shaped how I think about capital.

    Very early I saw a quote circulating online — a Bill Gates quote, though I don’t know if he ever actually said it:

    People who earn hundreds don’t think about spending tens.

    That line stuck with me.

    Instead of obsessing about saving every rupee, I became obsessed with increasing the size of the game.

    Make more.

    Build bigger things.

    Take bigger risks.

    The downside of this mindset is obvious.

    I became an extravagant spender when it came to experiments.

    But the upside is that I never developed fear around capital.

    Money became a tool for exploration, not something sacred.


    The Hard Lesson About Investors

    One of the most complicated parts of my journey was working closely with investors and collaborators on multiple projects.

    We built a lot of things.

    Clubs.
    AI tools.
    Startup programs.
    Research experiments.

    Some of them worked. Many didn’t.

    But the biggest lesson wasn’t about technology or business models.

    It was about power dynamics around capital.

    When someone invests money in you, even in exchange for equity, a subtle psychological shift often happens.

    They begin to feel like they own the company.

    And sometimes they start treating the founder as an operator rather than the builder.

    This dynamic can become dangerous if you let it happen slowly over time.

    Long conversations before investment can also create another problem: familiarity.

    If investors spend too much time around you before investing, they sometimes start to see themselves as your manager rather than your partner.

    That is not a healthy relationship.

    A founder is not an employee.

    Even when someone invests in your company.

    They are buying equity.

    They are not buying you.

    This distinction matters more than most young founders realize.


    What Most Young Founders Misunderstand About Capital

    Many young founders believe raising money is validation.

    It isn’t.

    Raising money simply means someone believes you might succeed.

    That’s it.

    Investors don’t fund the “best ideas.”

    They fund the most probable winners.

    That probability can come from many things:

    • elite universities
    • strong networks
    • previous successes
    • market momentum
    • or simply pattern recognition

    In India, if you are from a top-tier institution, doors open faster.

    In the US, if you are from Stanford or MIT, credibility comes almost automatically.

    This is uncomfortable to admit, but it is the reality of venture capital.

    VCs are not purely meritocratic systems.

    They are probability machines.


    Seeing Both Sides Of Capital

    I have been on both sides.

    As a founder needing capital.

    And as someone responsible for allocating capital.

    When you are the person giving money, something strange happens psychologically.

    You start to feel slightly superior.

    Even if logically you know the founder is the one building the company.

    Many investors start believing they know more than the founders.

    Often they don’t.

    Especially in technical fields.

    Today with AI tools like Claude, Gemini, and Copilot, many investors try coding small demos and start believing they understand the product deeply.

    They don’t.

    Building real systems is very different from generating prototypes.

    But founders also make a mistake.

    If your moat is only technology, you will eventually get crushed.

    Technology is getting commoditized very fast.

    Your real moat must exist somewhere deeper: distribution, insight, systems, or execution speed.


    My Personal Rule With Capital

    I am still learning this.

    But if I could give one rule to younger founders, it would be this:

    Don’t raise money before reality validates your idea.

    Reality means:

    • users
    • traction
    • real usage
    • ideally revenue

    Today it is easier than ever to build.

    You can ship products quickly.

    You can test ideas with real users.

    You can distribute through content and community.

    Use those tools first.

    Let traction attract capital.

    Don’t chase investors too early.


    A Strange Analogy I Sometimes Use

    Venture capital behaves a bit like social attention.

    If you chase it desperately, it runs away.

    But if you focus on becoming interesting — building things, solving real problems, shipping constantly — attention eventually comes to you.

    Investors are the same.

    Build something real.

    And eventually they start showing up.


    Final Thought

    My journey with capital has never been stable.

    It has been chaotic.

    Money comes.
    Money disappears.
    Experiments succeed.
    Experiments fail.

    But one thing has remained constant.

    Every time I lost money, I gained better questions.

    And in the long run, good questions are worth far more than early capital.

  • This Is the Mathematical Formula to Win Any Game


    There is a very simple equation in probability theory:

    P(at least one success) = 1 − (1 − p)^n

    It looks like textbook math.

    It is actually a model of how progress works.

    Let’s unpack it.

    Define two variables.

    p = probability of success in a single attempt
    n = number of attempts

    The expression 1 − (1 − p)^n computes the probability that you succeed at least once after n attempts.

    The term (1 − p)^n is the probability that you fail every single time.

    So the equation simply says:

    Probability of success = 1 − probability of failing every time.

    Now look at what happens when attempts increase.

    As n grows, (1 − p)^n shrinks toward zero.

    Which means:

    The probability of success approaches certainty.

    Not because the attempt got better.

    Not because the odds changed.

    Because you kept running the trial.


    A Simple Example

    Assume a terrible success rate.

    Let p = 0.001.

    That is a 0.1% chance of success per attempt.

    Most people would say those odds are useless.

    But run the math.

    After 100 attempts
    Success probability ≈ 9.5%

    After 1,000 attempts
    Success probability ≈ 63%

    After 5,000 attempts
    Success probability ≈ 99.3%

    The probability of success approaches certainty.

    Nothing about the attempt improved.

    Only the number of attempts increased.
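    The numbers above can be checked directly. A minimal sketch in Python (the function name is mine; the probabilities match the example, rounded to one decimal place):

```python
def success_probability(p: float, n: int) -> float:
    """Probability of at least one success in n independent attempts,
    each with per-attempt success probability p."""
    return 1 - (1 - p) ** n

p = 0.001  # a 0.1% chance of success per attempt

for n in (100, 1_000, 5_000):
    print(f"n = {n:>5}: P(at least one success) = {success_probability(p, n):.1%}")

# n =   100: P(at least one success) = 9.5%
# n =  1000: P(at least one success) = 63.2%
# n =  5000: P(at least one success) = 99.3%
```

    The same per-attempt odds, run 50 times longer, turn a near-hopeless bet into a near-certainty.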


    The Strategic Insight

    Success probability has two levers.

    Increase p
    Increase n

    Most people obsess over p.

    Better plan.
    More knowledge.
    Perfect strategy.

    But in many real systems, the dominant variable is n.

    Number of experiments.

    Startups work this way.
    Scientific research works this way.
    Venture capital works this way.
    Creative work works this way.

    Progress is not deterministic.

    It is probabilistic search.

    And probabilistic search rewards iteration velocity.


    Why This Matters

    Many people treat success like a single deterministic attempt.

    One company.
    One idea.
    One shot.

    That is the wrong model.

    The correct model is repeated trials.

    You are running a search process across a space of possibilities.

    Each attempt samples the space.

    The more samples you take, the higher the probability that one lands in the success region.

    This is exactly how:

    • evolutionary systems work
    • randomized algorithms work
    • scientific discovery works

    Nature does not search once.

    Nature searches millions of times.


    The Constraint

    There is one condition the equation requires.

    p > 0

    Success must be possible.

    If the probability of success is zero, infinite attempts still fail.

    This is the only real strategic question:

    Are you playing a game where success is possible?

    If the answer is yes, the next question is simple.

    How do you maximize n?
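    "Maximize n" can be made concrete by inverting the same formula: solving 1 − (1 − p)^n ≥ target for n tells you how many attempts a given confidence level requires. A hedged sketch, assuming the function name and the sample numbers are illustrative rather than from the original text:

```python
import math

def attempts_needed(p: float, target: float) -> int:
    """Smallest n such that 1 - (1 - p)^n >= target.
    Derived by solving (1 - p)^n <= 1 - target for n."""
    if not (0 < p < 1 and 0 < target < 1):
        raise ValueError("p and target must lie strictly between 0 and 1")
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# With a 0.1% per-attempt success rate, 99% confidence takes
# roughly 4,600 attempts.
print(attempts_needed(0.001, 0.99))  # → 4603
```

    The answer scales with 1/p, which is why cheap, fast attempts matter so much: halving the cost of an attempt is worth as much as doubling its odds.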


    The Builder’s Strategy

    Good builders optimize for iteration speed.

    They design systems where attempts are:

    • fast
    • cheap
    • reversible
    • information generating

    This increases the number of trials.

    Which increases the probability of hitting success.

    Over time the system compounds.

    The quantity 1 − (1 − p)^n approaches 1.


    Persistence Is Not Philosophy

    It is math.

    The equation says something very precise.

    If success has any non-zero probability, and you can attempt enough times, success becomes almost certain.

    The real skill is not predicting the correct attempt.

    The real skill is building a system where attempts never stop.

    That is how probability bends in your favor.

  • AI is No Longer a Moat


    First principles – Value lies in outcomes, not in methods.

    When users buy a product, they are buying the results, not the technology. Users don’t buy AI; they buy saved time, reduced friction, higher revenue, lower costs, and fewer headaches.

    Methods matter only to us, the builders. Outcomes matter to the real users.

    If I go to a restaurant to eat sushi, I don’t care whether a machine prepared it and saved the owner five bucks on each plate. I just care that the sushi tastes good.

    Initially, I might be drawn to the restaurant that is first to use a machine, but over time, as the practice becomes more common, the initial hype dies.

    Thus once a method becomes ubiquitous, it stops being a differentiator.

    That’s where AI sits today.

    The Shift Happened Quietly

    A year ago, adding ‘AI-powered’ to your landing page felt like a moat. It signalled novelty. Intelligence. Future.

    Today, every product is AI-powered. The good ones, the bad ones, the exceptional ones that VCs back, all of them.

    Analytics tools, supply chain systems, healthcare automation, customer support, CRM, design – anything you can imagine is AI-powered.

    The same trend is visible among YC-backed startups as well.

    Source: https://www.reddit.com/r/ycombinator/comments/1fbb9m0/the_rise_of_ai_companies_in_yc/

    But the real problem is that when everything claims AI, nothing differentiates.

    Users have stopped really caring.

    Some are even skeptical now. For many users, ‘AI’ just means unreliable, unfinished, hallucination-prone MVPs, or another thin wrapper over ChatGPT. Instead of trust, it sometimes creates doubt.

    The novelty phase is over. AI has become infrastructure.

    What Made This Click for Me

    When building SuperDocs, I shipped the product in two days. No heavy landing page. No elaborate positioning.

    • You land on the site.
    • You paste your GitHub repo
    • Docs get generated.

    That’s it.

    Nowhere did we say: ‘AI-powered documentation generator.’

    Instead, the message was simple: ‘Generate documentation in minutes.’

    And users didn’t ask:

    ‘Does it use AI?’

    They cared about one thing:

    ‘Does this save me time?’

    And it did.

    That was the realisation. AI wasn’t the product. The outcome was.

    Builders Are Marketing the Wrong Thing

    Most AI products in today’s market position themselves as:

    ‘Old solution + AI’

    For example:

    • Documentation tool with AI
    • Customer support with AI
    • Analytics with AI
    • Marketing tools with AI

    But if ten competitors say the same thing, no one stands out.

    Saying ‘we use AI’ is equivalent to saying:

    ‘We use databases.’

    ‘We run on cloud.’

    ‘We use APIs.’

    These are implementation details that users don’t care about.

    The real question is:

    Why you? Why now? Why better?

    Differentiator Must be Visible in Outcomes

    Real differentiation sounds like:

    • Generate docs in one click
    • Reduce support tickets by 60%
    • Cut onboarding time in half
    • Automate reports in seconds
    • Save teams 10 hours per week

    Now the user understands value immediately.

    The conversation shifts from:

    ‘What tech do you use?’

    to

    ‘What problem do you eliminate?’

    And that’s the only thing users ever cared about.

    When Should AI be Marketed?

    Okay, it’s not that you should completely ditch AI from your marketing plan. There are cases where AI is still worth highlighting.

    But only when it creates a defensible advantage, not when it’s a wrapper.

    For example:

    • Custom models trained on proprietary industry data
    • Ultra-low latency inference giving real-time advantage
    • Domain-specific intelligence competitors cannot replicate
    • Unique performance benchmarks
    • Workflow intelligence unavailable elsewhere

    Example positioning:

    ‘Hotel management platform powered by models trained on millions of hotel data points.’

    Now, AI is the moat, not just the tool.

    But if you’re calling an API or building a thin layer over general models, AI is not your differentiation.

    Your product experience is.

    AI has Become Electricity

    Nobody markets products as:

    ‘Powered by electricity.’

    Electricity is assumed. Invisible. Expected.

    AI is heading in the same direction.

    Soon every tool will use AI in some capacity. The winners won’t be those shouting about it. They’ll be the ones who make it invisible.

    The best technology disappears into experience.

    What Builders Should Do Now

    Stop asking:

    ‘How do we show we use AI?’

    Start asking:

    ‘What outcome improves because of AI?’

    Your messaging should translate technology into impact.

    Not:

    • AI powered workflows

    But:

    • Finish workflows in 30 seconds

    Not:

    • AI-driven automation

    But:

    • Eliminate manual work entirely

    Not:

    • Intelligent recommendations

    But:

    • Increase conversions by 20%

    Users don’t want intelligence.
    They want results.

    Final Thought

    AI itself is not magical to users.

    The magic is when life becomes easier.

    If your product saves time, reduces effort, or removes complexity, users will care. Whether AI is involved becomes irrelevant.

    So, the next time you write your landing page, pitch, or product copy, try removing the words ‘AI-powered’.

    If the value still stands, you’re building the right thing.

    If it doesn’t, you’re probably selling the method, not the outcome.

    And outcomes are what endure.

  • The Modern Developer’s Dilemma: Speed Without Shared Context


    AI Summary: Agent-assisted coding dramatically increases individual developer velocity, but it optimizes for local task completion rather than global system coherence. As more code is generated by agents, shared human context erodes: architectural intent becomes implicit, debugging shifts from causal reasoning to probabilistic trial-and-error, and teams lose durable mental models of their systems. This creates a structural coordination problem that compounds over time, especially as less-experienced developers ship increasingly complex systems. The real opportunity is not faster code generation, but tooling and workflows that preserve developer memory, shared context, and alignment between human reasoning and machine execution.

    Agent-assisted coding optimizes for local velocity, not global coherence. The result is codebases that no human fully understands and teams that cannot reason together effectively.

    I saw this firsthand while building Exthalpy.

    My team was small. Four developers. Initially, they barely used AI. Output was slow, but everyone understood what was being built and why. The system had friction, but it had coherence.

    I pushed the team to adopt AI-assisted coding aggressively. Velocity spiked. Features landed faster than expected. On paper, it looked like a pure win.

    In practice, context collapsed.

    One developer would ship a large change using an agent. The rest of the team would immediately lose the mental model of what had changed, how it worked, and why decisions were made. Most of the code was no longer written by humans. It was assembled by agents optimizing for task completion, not for shared understanding. Even a single day of absence was enough for the system to drift. The team could build a lot in 24 hours, but when we tried to reason about a specific behavior, nobody had durable context. The knowledge lived in private prompts and transient chats, not in shared artifacts.

    This is not a tooling problem. It is a coordination failure.

    Agentic development shifts cognition from developers into models. Locally, this is rational. You maximize throughput and reduce cognitive load. Globally, coherence erodes. Engineers stop forming deep internal representations of the system. Architectural intent becomes implicit. Interfaces drift. Invariants weaken. The codebase becomes legible primarily to machines.

    Agent-assisted coding changes how developers internalize systems. When most logic is produced through agent-assisted coding workflows, human context decays faster than teams realize.

    I saw the same pattern in my own workflow. In one project, I spent three hours debugging a Supabase issue manually. It was slow, but the context stuck. I understood the failure mode deeply. In another project, built rapidly through vibe coding with minimal documentation, that understanding evaporated within weeks. The code remained. The mental model did not.

    Speed erased memory.

    For solo developers and small teams embracing agentic workflows, this becomes a hidden bottleneck. You can move fast, but you cannot reliably compound understanding. Debugging becomes probabilistic. Collaboration becomes fragile. Scaling becomes risky because no one can reason confidently about system behavior.

    This is structural, not temporary.

    The number of inexperienced developers is increasing rapidly. Agentic tools allow them to ship systems far beyond their underlying understanding. As more software is produced this way, technical and coordination debt compound non-linearly. Better agents will increase output, but they will not automatically restore shared human context. They may accelerate its erosion.

    The core failure is that developer cognition has no durable memory layer.

    Today, context lives in scattered fragments: chat histories, local experiments, partial docs, forgotten mental notes. None of it compounds across time, projects, or teams. We have optimized heavily for generating code and almost not at all for preserving the reasoning that produced it.

    If developers could store working context in a persistent, queryable local “mind palace” — decisions, constraints, failures, architectural intent — they could compound understanding instead of leaking it. For teams, this memory layer would need to synchronize across contributors and environments, preserving shared cognition as systems evolve.

    The opportunity is not to make agents faster at writing code. It is to make humans better at retaining and transmitting understanding while machines operate at scale. The winning developer stack will optimize for coherence, memory, and alignment between human reasoning and machine execution.

    Speed without shared context is not leverage. It is latent fragility.