6 comments

  • hasyimibhar 2 minutes ago
    How does this compare to open source Deepnote[0]? We used the hosted version at my previous company to replace self-hosted Jupyter notebooks, and it's pretty great.

    [0] https://github.com/deepnote/deepnote

  • MSaiRam10 19 minutes ago
    Notebooks as the output format is funny, because notebooks are famously bad for reproducibility: out-of-order execution, hidden state, etc. You're solving "chat isn't reproducible" with a format that also isn't really reproducible.

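    The hidden-state complaint can be sketched in plain Python by treating cells as strings that share one namespace (a hypothetical toy model, not how Jupyter is actually implemented):

    ```python
    # Toy model of a notebook kernel: cells are strings, the kernel is one dict.
    ns = {}
    cell_1 = "x = 10"
    cell_2 = "y = x * 2"

    exec(cell_1, ns)      # run cell 1
    exec(cell_2, ns)      # run cell 2; ns["y"] is now 20

    del cell_1            # "delete" cell 1 from the notebook...
    exec(cell_2, ns)      # ...cell 2 still runs, because x lingers in the kernel
    assert ns["y"] == 20  # a fresh top-to-bottom run would raise NameError instead
    ```

    The notebook on disk no longer contains `x = 10`, yet the session keeps working, which is exactly the state you can't reproduce later.
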
  • 2ndorderthought 4 hours ago
    This is one of those product areas I would call high-risk without a human in the loop. So I am glad you kept a person in the loop. It's really easy to lose tons of money making decisions based on bad statistics or models. Anyone remember how much money zillow lost because of automatic time series models?

    I do have concerns about the workflow. Data people aren't usually the best programmers, and models hallucinate and make mistakes, sometimes subtle, sometimes not. Can you think of a way to keep data scientists from having to be expert code reviewers? I feel like taking away the code gives them the chance to find and fix mistakes in their reasoning, but I have no evidence for that.

  • jiggunjer 38 minutes ago
    IME "real data work" doesn't involve notebooks.

  • amirathi 2 hours ago
    Really cool. If somebody doesn't want to adopt a new platform, take a look at open source Jupyter MCP Server[1]. Once integrated with Claude, it can execute code on the live notebook kernel.

    I just let Claude write notebooks, run top to bottom, debug & fix errors & only ping me when everything is working.

    [1] https://github.com/datalayer/jupyter-mcp-server
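
    For anyone trying this, wiring the server into Claude follows the standard MCP config shape; the exact command, args, and env variable names below are assumptions, so check the repo's README for the real ones:

    ```json
    {
      "mcpServers": {
        "jupyter": {
          "command": "uvx",
          "args": ["jupyter-mcp-server"],
          "env": {
            "JUPYTER_URL": "http://localhost:8888",
            "JUPYTER_TOKEN": "<your-token>"
          }
        }
      }
    }
    ```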

  • estetlinus 2 hours ago
    This is one shot with Claude Code. What’s the moat?
    • 2ndorderthought 59 minutes ago
      Not the OP or affiliated, but:

      You really shouldn't, and often legally cannot, send data or even information about your data to third parties. Maybe schemas are okay, but one mistake and your company can be in serious trouble. So local models are a good idea.

      This is a safer workflow, if implemented correctly, for preventing certain types of mistakes when LLMs inevitably hallucinate or slip up.

      That said, $200? I don't believe the value is there. Someone can run a local model very easily, with one command-line call, and do this themselves for free.