DevoxxGenie is a fully Java-based LLM code assistant plugin for IntelliJ IDEA, designed to integrate with local LLM providers and cloud-based LLMs. In this blog, you will learn how to get started with the plugin and get the most out of it. Enjoy!
AI coding assistants are getting more and more attention. Multiple cloud services are available and IDEs provide their own services. However, with this approach, you are tied to a specific vendor and, most of the time, these services are cloud-based only. What should you do when your company policy does not allow you to use these cloud-based services? Or what do you do when a new, more accurate model is released? When using IntelliJ IDEA, you can make use of the DevoxxGenie plugin, created by Stephan Janssen, founder of the Devoxx conference. The plugin allows you to choose between several cloud-based and local LLMs. You are not tied to a specific cloud vendor or a specific model. This is important because every few weeks a new model is released that outperforms previous ones.
If you would rather watch a video, you can view the 30-minute Tools-In-Action talk I gave at Devoxx Belgium 2024: DevoxxGenie: Your AI Assistant for IDEA.
At the time of writing, v0.2.22 is the most recent release. However, new features are added continuously, because Stephan uses DevoxxGenie itself to create new features for DevoxxGenie. Here is a list of notable ones:
Install the plugin in IntelliJ IDEA via the settings. The plugin is available in the JetBrains Marketplace.
First, let's explain how you can get started with a cloud LLM. Anthropic will be used, but you can choose any of the cloud LLMs mentioned above. You need to be able to use the Anthropic API. Therefore, navigate to the pricing page for the API and click the Start building button. In the following steps, you need to create an account and explain what you want to do with the API. Next, you need to choose a plan, because you need credits in order to do anything with the API. Choose the Build Plan (Tier 1), which allows you to buy prepaid credits.
Add your credit card details and choose how much credit you want. To give you some indication of the cost: the full DevoxxGenie plugin source code is about 104K tokens. At the time of writing, Claude 3.5 Sonnet costs $3/MTok for input tokens and $15/MTok for output tokens. This means that adding the full project source code will cost you $3 x 104,000 / 1,000,000 = $0.312. On top of that come the tokens of your prompt itself and the output tokens, but a prompt that includes the full project source code will cost approximately $0.40.
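If you want to check such estimates yourself, the small sketch below does the arithmetic. The 104K input token count is the figure mentioned above; the output token count is only an assumption for illustration.

```java
public class PromptCostEstimate {

    public static void main(String[] args) {
        // Input tokens: roughly the full DevoxxGenie source code (figure from this article).
        double inputTokens = 104_000;
        // Output tokens: an assumption, a few thousand tokens of generated response.
        double outputTokens = 2_000;

        // Claude 3.5 Sonnet pricing at the time of writing, in dollars per million tokens.
        double inputPricePerMTok = 3.0;
        double outputPricePerMTok = 15.0;

        double cost = (inputTokens / 1_000_000) * inputPricePerMTok
                + (outputTokens / 1_000_000) * outputPricePerMTok;

        // Prints roughly $0.34, in line with the ballpark figure of $0.40 above.
        System.out.printf("Estimated cost per prompt: $%.3f%n", cost);
    }
}
```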
The last thing to do is to create an API key and add it to the DevoxxGenie plugin settings.
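If you want to verify that the key and your credits work before adding them to the plugin, you can send a minimal request to the Anthropic Messages API yourself. The sketch below is only an illustration: it assumes the key is available in the ANTHROPIC_API_KEY environment variable and that the model id shown is still a valid Claude 3.5 Sonnet id.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AnthropicKeyCheck {

    public static void main(String[] args) throws Exception {
        // Assumption: the API key is exported as an environment variable.
        String apiKey = System.getenv("ANTHROPIC_API_KEY");

        // Minimal Messages API request body; the model id may need updating.
        String body = """
                {
                  "model": "claude-3-5-sonnet-20240620",
                  "max_tokens": 64,
                  "messages": [{"role": "user", "content": "Say hello"}]
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.anthropic.com/v1/messages"))
                .header("x-api-key", apiKey)
                .header("anthropic-version", "2023-06-01")
                .header("content-type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 200 status code means the key and your credits are working.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```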
If you do not want to spend any money on a cloud provider, you can make use of a local LLM. When using local LLMs, be aware that your hardware will be a limiting factor. Some limiting factors include:
In the end, it is a trade-off between quality and performance.
In order to run a model, you need an LLM provider. A good one to start with is Ollama. After installing Ollama, you need to install a model. At the moment of this article's release, Llama 3.1 is a good model to start with. Install and run the model with the following command:
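```shell
# Pulls the model on first use and starts an interactive session
ollama run llama3.1
```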
Detailed information about the Ollama commands can be found at the GitHub page.
Before using DevoxxGenie, you need to first configure some settings. Navigate to the LLM Settings section:
There are several ways to add code to the context so that the LLM can produce better responses.
When you have selected the files needed for your prompt, you can just start typing your prompt and click the Submit the prompt button. Another way is to use one of the utility commands:
In the settings, you can change the prompts for the utility commands and you can also define your own custom prompts. For example, if there is a prompt you use often, you can define a utility command for it.
The chat memory is a very powerful feature. Very often, your prompt will not be specific enough for the LLM to generate the response you want. In that case, you can create a new prompt indicating what is wrong with the previous response. The previous prompt and response will be sent to the LLM as chat memory together with the new prompt. Do realize that when your first prompt contained the full project source code, it will be sent again in the chat memory of your second prompt. If you do not want this to happen, you need to create a new chat which will reset the chat memory.
Although you can enable Streaming Mode in the settings, it is not advised to do so. It is still in beta, and copying code from the response does not take line breaks into account, which is very annoying. In non-streaming mode, a copy button is available to copy code to the clipboard, which does take line breaks into account.
Some practical tips and tricks:
The DevoxxGenie plugin integrates well with IntelliJ and allows you to select a cloud LLM or a local LLM to suit your needs. Its most powerful feature is the freedom it gives you: the ability to use a local LLM provider and model of your liking, or a cloud provider of your liking.
These two features are especially important because company policy may not allow you to use a cloud provider, and besides that, every few weeks new models are released that outperform previous ones. With DevoxxGenie you can use the power of these new models right away.