20 comments

  • Izkata 1 day ago
    There was a small surge in popularity of distributed git issue trackers a bit over a decade ago, and all of them had some sort of problem baked into the design that made them not very good.

    Two weeks ago I had listed out the problems I could remember offhand: https://news.ycombinator.com/item?id=47956979

    It sounds like there's intentionally no attempt to handle the last one (that this is by devs for devs), and points 3 and 4 might be addressed somehow since it mentions syncing automatically. Does it store data separate from git to avoid the first two?

    • jolaflow 1 day ago
      Thanks for the input. Interesting list. A few notes on that:

      - Issue state is not tied to commits in the checked-out repo. Events live in append-only, user-scoped logs and are materialized independently of the checked-out branch, so switching branches does not change issue state. This is implemented with git worktrees.

      - Epiq keeps state in a dedicated state branch and does not put issue data into normal code history. The working branch stays clean.

      - Sync uses normal git push/pull semantics.

      - Multi-user conflicts are prevented because each user writes only to their own immutable event log file. You never co-edit a file; state converges in memory from the combined event stream. There’s no shared mutable issue document being edited.

      - The non-developer distribution can be addressed with exported state .md files (with the board rendered as ASCII). They are currently not generated automatically, but you can generate them at will. [edit - addition: Considerable effort has also been put into making the tool accessible to non-technical people, so there is auto-completion, hints, a command palette with descriptions of each command, arrow-key navigation and so on. It is my hope that anyone can pick it up rapidly. And a web interface could definitely be crafted for that use case]
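
      As a sketch of how those per-user logs can converge without conflicts (the event shape, the two-log split, and the last-write-wins ordering below are my illustrative assumptions, not Epiq's actual on-disk format):

```typescript
// Illustrative model of per-user append-only logs: each user writes only
// to their own log, and state is materialized in memory from the combined,
// time-ordered stream. Names and fields are assumptions, not Epiq's format.
interface IssueEvent {
  ts: number;                      // logical timestamp of the event
  issueId: string;
  field: "title" | "status";
  value: string;
}

// Two users, two separate logs -- no shared mutable file to conflict on.
const logAlice: IssueEvent[] = [
  { ts: 1, issueId: "A1", field: "title", value: "Fix login" },
  { ts: 3, issueId: "A1", field: "status", value: "done" },
];
const logBob: IssueEvent[] = [
  { ts: 2, issueId: "A1", field: "status", value: "in-progress" },
];

// Merge all logs, sort by time, and apply last-write-wins per field.
function materialize(...logs: IssueEvent[][]) {
  const state = new Map<string, Record<string, string>>();
  for (const ev of logs.flat().sort((a, b) => a.ts - b.ts)) {
    const issue = state.get(ev.issueId) ?? {};
    issue[ev.field] = ev.value;
    state.set(ev.issueId, issue);
  }
  return state;
}

const issues = materialize(logAlice, logBob);
```

      Both users can push independently, neither touches the other's file, and the replay yields `{ title: 'Fix login', status: 'done' }` for issue A1.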

      • cxr 1 day ago
        You don't need to put it on the Web to be able to leverage the World Wide Wruntime.

        Epiq looks to be written in TypeScript and distributed as JS via NPM. You know what excels at executing JS? The browser.

        If you want to actually address the usability problems, create a CONTRIBUTING.html, linked from the README, that users are instructed to double-click to open (i.e. launch in the browser on any sanely configured system). From there, they can/should be able to load the project either by pointing to it with a filepicker-based workflow like VSCode's "Open Folder…" workflow, or by dragging and dropping the source tree into the browser window. If you do it right, this should immediately present them with a browser-based UI for poring over and interacting with all the Epiq data in the repo—down to the Git commands to execute to integrate changes into the Epiq "database".

        It's beyond baffling that so many programmers who are nominally JS developers thumb their noses at writing standards-compliant code and instead insist on coding directly against Node's proprietary APIs.

        • mentalgear 21 hours ago
          Indeed, I would use this in a browser (sandboxed), but in these times of exponential supply chain attacks, I'm not `npm install`ing anything outside a VM, and definitely not globally.

          The browser, with its sandbox hardened for the internet, is the way to go for any future personal/DX tools that were previously Node-only.

        • hiccuphippo 17 hours ago
          How can the browser execute git commands from opening a local HTML file? Maybe if you give the file a different extension and configure an application to run a web server and open the default browser when the file is double-clicked?
          • cxr 8 hours ago
            > How can the browser execute git commands from opening a local html file?

            It can't. The CONTRIBUTING.html shell would spit out a file and tell the user what Git commands need to be run—just like project READMEs (or landing pages like jekyllrb.com) show which commands will install the tool.

        • locknitpicker 21 hours ago
          > It's beyond baffling that so many programmers who are nominally JS developers thumb their noses at writing standards-compliant code and instead insist on coding directly against Node's proprietary APIs.

          You're talking about node.js projects running on node.js, and you're complaining that it consumes node.js APIs. Strange.

          • cxr 17 hours ago
            There is no argument or insight in your comment. It's physically possible to type in code that makes direct use of non-standard APIs that work in NodeJS but not the browser. Pointing out that this is so and that there are people who do it is not the same as engaging with the subject of whether they ought not to—which was the point of the remarks you responded to. Previously:

            > You're offering a retort to someone who is communicating their position that you ought not do something, where the retort consists of nothing more than explaining that people are doing it. Yes, clearly. But what the person you're responding to is arguing is that you ought not do it.¶ Consider[…]:

            > Person A: Here's little advice: don't take up smoking. Smoking is bad for you.

            > Person B: Yet people smoke

            <https://news.ycombinator.com/item?id=38712699>

            • locknitpicker 17 hours ago
              > There is no argument or insight in your comment. It's physically possible to type in code that makes direct use of non-standard APIs that work in NodeJS but not the browser.

              You're clearly trying too hard to not understand the issue you're commenting on. For starters, you're purposely ignoring the fact that Node.js is the runtime. Not JavaScript, Node.js. The project is a command line app running on Node.js. It needs to parse command line arguments. It needs to execute commands. It quite possibly needs to access the local file system. It's Node.js, not something that runs on Chrome. This is the very basics. If you do not understand this, you can't even know what JavaScript is. So why are you acting like a white knight for a technology you don't even understand?

              • cxr 16 hours ago
                I am one of ~3 people primarily responsible for the JS Reference as it appeared/appears on developer.mozilla.org since before NodeJS (or V8) ever existed. I "know what JavaScript is".
                • hypfer 14 hours ago
                  Check the comment history. You're currently being trolled.

                  Also, thank you for your work.

    • tim-projects 1 day ago
      I am writing a tool with git tracking. Here's how I tackled these. I store the issues in a worktree inside the git repo called .tasks.

      This worktree is not included as part of the standard repo.

      Then I have two commands, save and restore. These commands create a remote branch inside the git repo called tasks and update it with the contents of the .tasks worktree. This remote branch contains only tasks, no normal code.

      Restore takes the contents of the remote branch and downloads it back to the locally created .tasks worktree.

      Save and restore are manual processes, but the tool I wrote triggers a save whenever a merge to main occurs.
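
      A rough sketch of that save/restore flow, expressed as the git commands each step would run (the "tasks" branch and ".tasks" worktree names come from the description above; the exact flags are my assumptions about one reasonable implementation, not the author's actual tool):

```typescript
// Hypothetical sketch of the save/restore flow: each function returns the
// git commands it would run against the .tasks worktree and tasks branch.
// Flags and messages are assumptions, not the author's implementation.
function saveCommands(worktree = ".tasks", branch = "tasks"): string[] {
  return [
    `git -C ${worktree} add -A`,                   // stage all task changes
    `git -C ${worktree} commit -m "update tasks"`, // commit on the tasks branch
    `git -C ${worktree} push origin ${branch}`,    // publish the tasks branch
  ];
}

function restoreCommands(worktree = ".tasks", branch = "tasks"): string[] {
  // pull the remote tasks branch back into the local .tasks worktree
  return [`git -C ${worktree} pull origin ${branch}`];
}
```

      A tool could feed these through child_process.execSync, and a post-merge hook on main could invoke the save step to get the automatic trigger described above.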

    • captn3m0 17 hours ago
      Isn’t splitting code and meta into two repos the same solution here? Like how GitHub tracks your Wiki in a separate repo (which you could repurpose for your issues, even).
      • Izkata 12 hours ago
        The ones in those first two bullets made issues part of the repo, so you'd clone/push/pull updates automatically with the normal git commands. They were trying to reduce friction: fewer new commands to remember, local data so the commands were instant like "git commit" or "git show" are, and automatic syncing that only happened when the user was already syncing, so there wouldn't be unexpected hangs if the remote was inaccessible for some reason. Putting them in the same repo also meant that, since they came along with every normal git clone, every repo had a copy regardless of whether a specific user had the tool, so switching between hosts would never be an issue and would automatically be supported without having to update however the two repos were linked.

        The one that tied issues to specific commits I think even portrayed the different-states-in-different-branches behavior as a feature: for example, you could easily tell at a glance whether a bug had been fixed on the branch you're on. This was also the era when people were figuring out complex branching strategies like gitflow, where that would be a reasonable thing to be uncertain about.

        Like I said, the problems were part of the design, not incidental; the tradeoffs just ended up not being what people wanted.

        Also something else I didn't mention before, all of these were command-line, not TUI. I have no idea how that would've changed the result. For example I could imagine automatic background syncing actually being reasonable, sidestepping some of the issues the command-line ones had to work around.

    • crabbone 19 hours ago
      Out of the things listed there, I see some as positives rather than negatives, but, specifically, I wanted to reflect on:

      > No non-developer UI for project managers to see or comment on issues.

      Strategically, I'd prefer this over anything that offers a non-developer UI. My experience with any tool that offers a non-developer UI for developer-related activities has been infuriatingly negative (think Jenkins, JIRA, GitHub and the like), because these UIs usually expose the underlying functionality in a bad way that creates pathologically bad practices, which then require the developers to accommodate the lowest common denominator.

      Here's one example: the GitHub or GitLab PR management interface. Before this became "standard practice", PRs used to be dealt with from whatever interface to Git the developer chose. That allowed more freedom of editing and communication, but, most importantly, it didn't lock the developers into a few selected choices for reconciling the new changes with the existing code.

      GitLab, for example, doesn't even offer the only good way to do that: there's no way, using the GitLab interface, to rebase the suggested changes on the target branch. All the options it offers in the UI are wrong. And yet companies, like the one I work for, make it corporate policy to work exclusively through this garbage UI because they make their Ops / IT teams design workflows around it.

      As an aside: if a project manager cannot use Git, they shouldn't be a project manager. There are some job requirements that one must meet in order to hold a job, and using the most popular VCS should be one of them. This is just as true as it is true for developers: if they don't know / can't use Git, they shouldn't be in that role. The manager's ineptitude shouldn't be an excuse to make / adopt cruddy software.

  • goyozi 1 day ago
    You have my upvote because I love Git-based apps. There’s something cool about Git being an effective database with loads of free hosting options.

    I’d (re)consider a couple of things if you intend to work on it and make it viable for a wider audience.

    1. Who is it aimed at? If product managers and designers _are_ in scope, e.g. you imagine full engineering teams using it, then a TUI isn’t gonna cut it. It’s a great interface choice for devs, but I don’t think it’s organizationally viable to force everyone else into the terminal.

    2. I’d think about either having a central issues repository as a default / recommended option or creating an easy way for linking issues together across repos. To me, as appealing as it sounds to have your code and issues together, these things often evolve at a different pace. If I want to edit an issue I’m working on to add some new info or address changing requirements, I almost definitely don’t want to commit and push it with my local WIP version of the code.

    • jolaflow 19 hours ago
      Thanks for the upvote!

      Let me address these:

      1. I envision people already comfortable with the terminal as the first to pick it up, but as someone else pointed out, a considerable amount of effort has gone into lowering the threshold and making it usable for people with limited command-line experience. I can also definitely see a future web UI being added over time.

      2. There is no requirement to run Epiq alongside your codebase - you could also use a separate repository dedicated to issue tracking. When you run epiq, it traverses upward until it finds the nearest project definition. In theory, you could initialize Epiq at a higher-level scope and have multiple child repositories share the same board state, although that setup has not been officially verified or supported yet, so I would wait until there is a version that explicitly supports it.
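
      The upward traversal in that first sentence can be sketched like this (the ".epiq" marker file name is a made-up placeholder; I don't know what Epiq's project definition file is actually called):

```typescript
// Sketch of "traverses upward until it finds the nearest project definition".
// The marker file name ".epiq" is a hypothetical placeholder for illustration.
import { existsSync } from "node:fs";
import { dirname, join } from "node:path";

function findProjectRoot(start: string, marker = ".epiq"): string | null {
  let dir = start;
  while (true) {
    if (existsSync(join(dir, marker))) return dir; // nearest project definition
    const parent = dirname(dir);
    if (parent === dir) return null; // hit the filesystem root without a match
    dir = parent;
  }
}
```

      Because the search stops at the first match, a project definition placed at a higher-level scope would naturally be shared by every child repository beneath it, which is the multi-repo setup described above.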

    • mentalgear 20 hours ago
      Quick note: the TUI is so well made that it seems easily usable by anyone who seriously wants to use it.
      • jolaflow 19 hours ago
        Thanks for the encouraging input!
  • joeblubaugh 1 day ago
    It’s very slick, but I would be interested to know how separable the UI and the data layer are. I love vim but asking a collaborative group to all use a TUI is difficult. A local web server would be a nice alternative UI
    • jolaflow 1 day ago
      Good point. A local web UI is probably one of the highest priorities from here. The UI, persistence and materialization layers are already fairly separated architecturally.

      The current TUI is built with Ink, which is a React renderer for the terminal, so conceptually the UI structure already maps naturally to the web.

  • dmos62 16 hours ago
    @jolaflow looks great. What motivated you to build this? If you wanted TUI issue tracking and CLI/MCP interfaces, I imagine there's already a lot of tools for that.

    To be clear, I'm really into this. I'm using a custom git-based agent and this is a viable replacement for its issue tracking.

    • jolaflow 12 hours ago
      Thanks for the kind feedback! My initial motivation was the lack of ergonomics in the tools I used. I had a vision of what I wished issue tracking was like, and the tools we were using were nothing like that. In my opinion they were hard to navigate, slow, unreliable, and prompted you to log in every once in a while. I thought I’d be able to make something useful in a weekend. It took me a year of on-and-off coding, but it’s been a great journey!
  • nextaccountic 1 day ago
    > Agent interactions

    > The MCP server lets AI tools interact with Epiq in a predictable way.

    Or maybe just publish a skill for the agent to use your CLI? The agent already uses CLI commands to interact with git itself.

    • jolaflow 20 hours ago
      It's mainly about robustness and deterministic outcomes.

      There is a small level of noise in TUI output, and structures that are easily parsed by a human can be ambiguous for an agent (for instance column layouts). You could definitely let agents interact with Epiq purely through the CLI, but the idea behind the MCP server is to provide stable, predictable interfaces where determinism matters.

    • locknitpicker 21 hours ago
      > Or maybe just publish a skill for the agent to use your CLI?

      This. Skills have effectively rendered MCPs obsolete in the vast majority of MCP applications. A single CLI tool implemented in a progressive-disclosure style doesn't even need an agent skill for coding agents to use it effectively.

  • eterps 19 hours ago
    > Conflict handling model: Later events take precedence when conflicts occur

    Do I understand correctly that if 2 people add a lot of information to one issue only one of them 'wins' and becomes visible? Or is it more subtle?

    If only the latter one becomes visible, how do you get to the edits of the other person and 'merge' it again?

    • jolaflow 19 hours ago
      That is a known limitation as of now. Text updates are currently handled as whole chunks, so Epiq does not implement character-level CRDT merging.

      In the event of conflicting updates to the same text block (currently title or description fields), later events take precedence.

      What you can do is use commands like ":peek prev" (takes you to the previous edit), ":peek 1h", or other time-travel commands to inspect previous states and manually recover overwritten changes if needed.
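
      To illustrate the combination of whole-chunk last-write-wins and time travel (the event shape and function names below are illustrative; this is not Epiq's actual API):

```typescript
// Illustrative sketch: because edits live in an append-only log, the state
// at any earlier moment can be rebuilt by replaying only the events up to
// that timestamp -- the idea behind ":peek". Not Epiq's actual data model.
interface Edit { ts: number; description: string; }

const log: Edit[] = [
  { ts: 100, description: "Alice's detailed write-up" },
  { ts: 200, description: "Bob's overwrite" }, // later event wins (whole chunk)
];

// Replay events up to `asOf`; the last one visible is the winner.
function descriptionAt(log: Edit[], asOf: number): string | undefined {
  const visible = [...log].sort((a, b) => a.ts - b.ts).filter(e => e.ts <= asOf);
  return visible.length ? visible[visible.length - 1].description : undefined;
}
```

      Replaying with a late cutoff yields Bob's text, while an earlier cutoff recovers Alice's overwritten version, which is what a peek-then-manually-merge workflow relies on.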

      • eterps 17 hours ago
        > What you can do is use commands like ":peek prev" (takes you to the previous edit), ":peek 1h", or other time-travel commands to inspect previous states and manually recover overwritten changes if needed.

        Thanks, I think that would work fine in most cases if you can open your editor with the 'prev' version and the current version in 2 panes (or in diff mode).

  • SidVikJay 22 hours ago
    Really elegant solution to the distributed issue state problem. Using user-scoped immutable event logs to prevent git conflicts is a clever architectural choice. Congrats on the launch!
    • jolaflow 20 hours ago
      Thank you for the kind words!
  • targetbridge 17 hours ago
    As a web developer, the local web server UI idea sounds like the natural next step. Since you're already using Ink/React for the TUI, the component model should map well to a browser-based UI. The git-as-database approach is elegant — curious to see how it evolves.
    • jolaflow 11 hours ago
      Thank you! I too am excited about the next steps. There are a lot of interesting paths still to explore, and I agree that a web interface would be a natural evolution.
  • eddy-sekorti 17 hours ago
    Looks good, I will try this out.
    • jolaflow 11 hours ago
      Thanks! I'd be happy to hear what you think after trying it out!
      • eddy-sekorti 11 hours ago
        Yes, sure, I have sent it to my developer and will give you feedback. One thing I noticed (https://ljtn.github.io/epiq/): since it will also be used in enterprises, you need to show them you are secure and compliant. You can set up a free trust center via sekorti dot com; it takes less than 30 minutes to be enterprise-ready. Let me know if you are interested and I will upgrade your account for free.
  • samuell 1 day ago
    I think this is a cool project. I see a lot of use cases for it, in situations where it is preferable to keep issues local to the repo, distributed via git only, and not least for all kinds of personal task management. Avoiding the context switch to a web-based tool is a nice plus.
  • joshka 1 day ago
    I really like the idea of a distributed issue tracker. I'd encourage you to look at git-bug if you haven't already done so for some prior art / inspiration / hard learned lessons.
  • swoorup 1 day ago
    I would prefer a single binary and skill over mcp.
    • jolaflow 20 hours ago
      They are not mutually exclusive. The MCP layer is there for deterministic outcomes, because there are times and places where that matters a lot.
  • gauravs19 17 hours ago
    Excellent concept
    • jolaflow 11 hours ago
      Thank you for the kind words!
  • aniceperson 17 hours ago
    Cool project, but a global npm install is a no for me, and there seems to be nothing similar to Python venvs in the Node.js ecosystem.
  • quantummagic 17 hours ago
    Obviously you should be free to develop anything you want, using any tools you want. But the installation instructions featuring "npm" as the first step, means a hard pass for me. We have to stop building on quicksand.