<aside> 💡 TinyGen is an LLM Agent that can suggest code changes for you in a GitHub repository of your choice!
Whether it's adding a feature, fixing a bug, or just getting guidance on how to implement something, TinyGen can assist with your coding needs!
Try it out ⇒ https://tiny-gen.streamlit.app/
</aside>
https://www.youtube.com/watch?v=aAXo5XXDPgE
There are two inputs to TinyGen:
repoUrl: the public link of the GitHub repository to process
prompt: any change/feature request that you want applied to the codebase
Given these inputs, TinyGen first loads all of the files from the GitHub repository to be processed. Currently, this processing is done in memory: once the response from TinyGen is received, the files are discarded and are not stored in a database.
After the files are loaded, they are sent to either TinyGen 1.0 or TinyGen 2.0, depending on which Agent the user chooses.
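TinyGen's actual loader isn't shown here, but the in-memory step might look like this sketch, assuming the repository has already been cloned to a local directory (`load_repo_files` is a hypothetical helper, not TinyGen's real API):

```python
import os

def load_repo_files(repo_dir: str) -> dict[str, str]:
    """Load every text file under repo_dir into an in-memory dict
    mapping relative path -> file contents (nothing is persisted)."""
    files = {}
    for root, _dirs, names in os.walk(repo_dir):
        # Skip git metadata
        if ".git" in root.split(os.sep):
            continue
        for name in names:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8") as f:
                    files[os.path.relpath(path, repo_dir)] = f.read()
            except UnicodeDecodeError:
                pass  # skip binary files
    return files
```

Because everything lives in a plain dict, the files vanish as soon as the process finishes responding, matching the no-database behavior described above.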
TinyGen 1.0 is a 3-component agent
TinyGen 1.0 is able to process small repos and suggest code changes only on ONE file.
TinyGen 2.0 is a 5-component agent
TinyGen 2.0 can process large or small repos and can suggest code changes on one or MORE files. Although the latter can handle larger workloads, it comes at a higher cost.
For TinyGen 1.0, it is assumed that the entire repository fits in the context window of the LLM (in this case GPT-4 Turbo, with a 128k-token limit). TinyGen 2.0 bypasses the token limit by first generating a summary of each individual file in the repository using GPT-4-Turbo-Preview. For example, if a repository contains 125 files totaling 150k tokens, GPT-4 first writes a summary for each file. This file-summary chain is batch-called, meaning that 125 calls to GPT-4-Turbo-Preview are made in parallel. Once the summaries are generated, they are passed into the context of GPT-4 Turbo for use in further chains. The combined summaries are assumed to fit within the 128k-token context window.
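The parallel batch call could be sketched roughly as below; `summarize_file` stands in for the real GPT-4-Turbo-Preview request (replaced by a placeholder here so the sketch runs without an API key), and the thread-pool fan-out is an assumption about how the parallelism might be implemented:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize_file(path: str, contents: str) -> str:
    # Placeholder for a real GPT-4-Turbo-Preview call; here we just
    # truncate the file so the sketch is runnable offline.
    return f"{path}: {contents[:60]}"

def batch_summarize(files: dict[str, str], max_workers: int = 16) -> dict[str, str]:
    """Fire one summarization call per file, all in parallel."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {path: pool.submit(summarize_file, path, text)
                   for path, text in files.items()}
    # The combined summaries must still fit in the 128k-token window.
    return {path: fut.result() for path, fut in futures.items()}
```

For the 125-file example above, this issues 125 concurrent requests, and only the much shorter per-file summaries are carried forward into GPT-4 Turbo's context.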
As you can see, generating summaries of 125 files or even more can become very costly! 💸
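A back-of-the-envelope estimate for the example above, using illustrative per-token prices (the real OpenAI pricing varies over time; both the prices and the assumed 200 summary tokens per file are placeholders):

```python
# Rough cost estimate for the summarization pass alone.
# Assumed placeholder pricing: $10 per 1M input tokens,
# $30 per 1M output tokens (not official figures).
input_tokens = 150_000       # total repo size from the example above
output_tokens = 125 * 200    # ~200 summary tokens per file, assumed

cost = input_tokens / 1e6 * 10 + output_tokens / 1e6 * 30
print(f"~${cost:.2f} just for the summary pass")
```

Even with these modest assumptions the summary pass alone costs a couple of dollars per request, and it scales with repository size.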
Some further comparisons between the two Agents are shown below: