
My previous article on migrating the Spring Petclinic Rest project to Helidon (see here) received a lot of positive feedback, which encouraged me to explore this area further.
The manual conversion process, while feasible, is time-consuming and requires careful attention to detail. However, it’s often more technical than creative—tedious, in other words. Automating this process would save time and reduce errors. While simple rule-based approaches (e.g., replacing <some Spring annotation> with <Helidon annotation>) can handle basic cases, they fall short for more complex conversions, such as transforming Spring REST Controllers to JAX-RS. This is where AI can help.
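To give a sense of what such a conversion involves, here is a simplified before-and-after sketch. The class and method names are illustrative, not taken from a real project.

```java
import java.util.Optional;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Illustrative supporting types (assumed, not from the actual project)
record PetDto(int id, String name) {}

interface PetService {
    Optional<PetDto> findById(int id);
}

// Before: a typical Spring MVC REST controller
@RestController
@RequestMapping("/pets")
public class PetController {

    private final PetService petService;

    PetController(PetService petService) {
        this.petService = petService;
    }

    @GetMapping("/{id}")
    public ResponseEntity<PetDto> getPet(@PathVariable("id") int id) {
        return petService.findById(id)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }
}
```

```java
import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

// After: the equivalent JAX-RS resource for Helidon MP. The annotations,
// return types, and injection style all change together, which is why
// simple find-and-replace rules fall short.
@Path("/pets")
@RequestScoped
public class PetResource {

    @Inject
    PetService petService;

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getPet(@PathParam("id") int id) {
        return petService.findById(id)
                .map(pet -> Response.ok(pet).build())
                .orElse(Response.status(Response.Status.NOT_FOUND).build());
    }
}
```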
In this article, I’ll explain the approaches I took, the challenges I faced, and the results (spoiler alert: the results are very positive). I’ll start with a test project I created for the conversion.
Test Project for Conversion
Initially, I considered using Spring Petclinic Rest as the target project. However, I found it too large and slow to process while iteratively testing my approach. Additionally, it includes components like multiple data layers for different databases, which add unnecessary complexity and offer limited value for the converter’s development. To address this, I created a custom test project called Spring Pets.
Spring Pets is a streamlined version of Spring Petclinic Rest, where the “clinic” aspect has been removed, and the design has been simplified. Despite its smaller scale, Spring Pets retains the essential functionality required for thorough testing of the conversion process, including:
- Spring Data JPA with HSQL database
- Spring Transactions
- Bean Validation
- Layered Design with three layers
- Spring REST Controllers
- MapStruct Mappers
- Jackson JSON Binding
- Spring MockMVC-based Tests
This project provides a balanced combination of simplicity and completeness, making it an ideal candidate for evaluating the AI-driven conversion process.
Conversion Results
Before diving into the technical aspects of the conversion, I want to share the results upfront, so you don’t have to scroll to the bottom to find them. After all, that’s likely the most interesting part of the article.
I successfully converted the entire Spring Pets project (excluding tests) using the incremental approach, which I explain below. I was able to compile the project after commenting out some dependencies in pom.xml.
The scope of the conversion:
- Three Spring Data JPA interfaces were implemented using CDI and JPA (a sketch of this transformation follows the list).
- Three Spring REST Controllers were converted to JAX-RS resources.
- One Spring service layer bean was converted to a CDI bean.
- Spring Boot pom.xml was converted to Helidon MP pom.xml.
- Six JPA Entities, three MapStruct mappers, and three DTOs were converted with minimal changes.
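To illustrate the first item, here is a hedged sketch of what replacing a Spring Data interface with a hand-written CDI/JPA bean can look like. The `Owner` entity and method names are illustrative, not from the actual project.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.repository.CrudRepository;

// Illustrative entity (assumed, not from the actual project)
@Entity
class Owner {
    @Id
    private Integer id;
    private String name;
}

// Before: Spring Data generates the implementation at runtime
interface OwnerRepository extends CrudRepository<Owner, Integer> {
}
```

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import jakarta.transaction.Transactional;
import java.util.List;
import java.util.Optional;

// After: an explicit CDI bean backed by plain JPA
@ApplicationScoped
public class OwnerRepository {

    @PersistenceContext
    private EntityManager em;

    public Optional<Owner> findById(int id) {
        return Optional.ofNullable(em.find(Owner.class, id));
    }

    public List<Owner> findAll() {
        return em.createQuery("select o from Owner o", Owner.class).getResultList();
    }

    @Transactional
    public Owner save(Owner owner) {
        return em.merge(owner);
    }
}
```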
The conversion process took about 10 minutes using the OpenAI GPT-4o model. How long would it take manually? Ten times more? Twenty? More?
It’s definitely a time-saver.
The Conversion Process
Choosing a Model
Working with AI models is similar to working with developers. If the developer is experienced, you don’t need to explain every detail—just say “go ahead and do it.” For less experienced developers, you need to provide more explanation.
Better models produce better results and require less effort in explaining the task.
After trying several models, I ultimately settled on OpenAI GPT-4o, which produced the best results. The tradeoff is that it can be expensive, as each iteration costs a few cents. As an alternative, I also used OpenAI GPT-4o-mini, which is less expensive and produces acceptable results. It’s also possible to use locally hosted models, but the ones I tried produced worse results than GPT models and required more refined prompts. However, they are free.
Now, let’s dive into the conversion approaches.
Conversion Approaches
I considered several ways to approach the task of converting a Spring project to Helidon. I looked at the project as a whole to understand its technologies, then identified Helidon equivalents for those technologies. From there, I began converting files one by one, starting with those that had no dependencies on others.
How could I make AI handle this process? I came up with three approaches:
- Contextual Conversion: “Hey, AI! Here’s my project—go ahead and convert everything. Here are some general instructions and tools to read and write files.”
- Incremental Conversion: “Hey, AI! I’ll give you files one by one. First, you need to identify the file type, and then I’ll give you detailed instructions on how to convert it.”
- Hybrid: “Hey, AI! I’ll give you files one by one. First, you need to identify the file type, and then I’ll give you detailed instructions on how to convert it. I’ll also provide tools to access other project files that you can use during the conversion.”
The key difference between these approaches is how much control the model has and how much project context it retains. The contextual approach assumes the model has access to the entire project context and can work with all project files in any order. The incremental approach, on the other hand, provides less context but gives the model detailed instructions for each individual file. The hybrid approach combines aspects of both the contextual and incremental approaches. While it could theoretically offer the best results, I haven’t experimented with it yet, so it’s not covered in this article.
Approach 1: Contextual Conversion
The first approach I tried was to provide the model with the entire project context and let it determine the best way to convert everything. This process involves a single prompt with general instructions and a set of callback functions to access the project.
Theoretically, this approach allows flexibility in changing the project structure and adding or removing files.
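The exact tool definitions live in the converter's repository; conceptually, the callbacks handed to the model boil down to something like this sketch (the names and signatures here are mine, not the project's):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch of the file-access callbacks ("tools") exposed to the model.
public final class ProjectTools {

    private final Path projectRoot;

    public ProjectTools(Path projectRoot) {
        this.projectRoot = projectRoot;
    }

    // Lets the model discover the project structure.
    public List<String> listFiles() throws IOException {
        try (Stream<Path> paths = Files.walk(projectRoot)) {
            return paths.filter(Files::isRegularFile)
                    .map(p -> projectRoot.relativize(p).toString())
                    .toList();
        }
    }

    // Lets the model read a source file it wants to convert.
    public String readFile(String relativePath) throws IOException {
        return Files.readString(projectRoot.resolve(relativePath));
    }

    // Lets the model write a converted file back to disk.
    public void writeFile(String relativePath, String content) throws IOException {
        Path target = projectRoot.resolve(relativePath);
        Files.createDirectories(target.getParent());
        Files.writeString(target, content);
    }
}
```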
You can check out the GitHub project for the converter here.
At first, it worked well, and my initial reaction was, “Wow!” However, as with many things, the devil is in the details. There were small errors, and fixing them made the prompt increasingly larger. Eventually, I realized that I was detailing the conversion process for specific file types, which led me to consider a more effective approach: processing files individually.
This approach is resource-intensive. With GPT-4o, each conversion attempt costs about 15 cents, and tokens per minute (TPM) limits are often exceeded, requiring timeouts. This was one of the main reasons I created the smaller Spring Pets project for testing.
I also struggled to make this work with locally hosted models, despite trying several options without success.
Pros:
- Provides a comprehensive understanding of the project context.
- Allows for changes in project structure.
- Supports the addition and removal of project files.
Cons:
- High resource consumption.
- Long processing times.
- Local models are insufficient for handling this complexity.
My conclusion is that GPT-4o is not yet sophisticated enough for this approach, but as AI models continue to evolve, this method may become more viable in the future.
Approach 2: Incremental Conversion
In this approach, each file is processed independently, without retaining context across files. It involves two steps:
- Identify the file type.
- Use a prompt with conversion instructions for that file type and pass it to the model along with the file.
I developed a converter based on this method. You can explore it on GitHub.
Initially, I considered building a dependency graph and using topological sorting to create a list of files, starting with those that had no dependencies. For each file, I used JavaParser to analyze it and detect its type (e.g., REST controller or Spring Data repository). Ultimately, I switched to using AI for file-type identification, as it was simpler and more effective. Using a prompt, I asked the model to return a file type string based on the following criteria:
Classify the file based on the following criteria:
1. POM_XML: File name is `pom.xml`.
2. REST_CONTROLLER: Java class annotated with @RestController.
3. REPOSITORY: Java interface or class annotated with @RepositoryDefinition or extending Repository/CrudRepository.
4. ENTITY: Java class annotated with @Entity, @MappedSuperclass, @Table, or @Id.
5. SPRING_BEAN: Java class annotated with @Service, @Component, @Bean, or @Configuration.
6. PLAIN_JAVA: Java class, interface, or record that doesn't meet any other criteria.
7. UNIDENTIFIED: File is not a Java file or doesn't match any of the above criteria.
Output only the classification (one of the above).
This approach works well with different models, with minimal tuning required. Since the classification logic is externalized, it can be updated without recompiling the converter.
After identifying the file type, the converter reads a corresponding prompt (e.g., prompt_<file-type>.txt) and passes it to the model along with the file to convert, saving the result to disk.
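Putting the two steps together, the core loop looks roughly like the following sketch. The `chat` method and the `prompt_identify.txt` file name are placeholders, not the converter's actual API; the output handling is also simplified (it flattens the directory structure).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch of the two-step incremental loop; chat(...) stands in
// for whatever model client you use (OpenAI, a local LM Studio endpoint, ...).
public class IncrementalConverter {

    private final Path promptDir;

    public IncrementalConverter(Path promptDir) {
        this.promptDir = promptDir;
    }

    public void convert(List<Path> sourceFiles, Path outputDir) throws IOException {
        String classifyPrompt = Files.readString(promptDir.resolve("prompt_identify.txt"));
        for (Path file : sourceFiles) {
            String source = Files.readString(file);

            // Step 1: ask the model to classify the file (REST_CONTROLLER, ENTITY, ...).
            String fileType = chat(classifyPrompt, source).trim();
            if (fileType.equals("UNIDENTIFIED")) {
                continue; // leave files we cannot classify untouched
            }

            // Step 2: load the conversion prompt for that type and convert.
            String convertPrompt = Files.readString(
                    promptDir.resolve("prompt_" + fileType.toLowerCase() + ".txt"));
            String converted = chat(convertPrompt, source);

            Files.writeString(outputDir.resolve(file.getFileName()), converted);
        }
    }

    // Placeholder for a real model call (OpenAI API, local server, etc.).
    private String chat(String prompt, String file) {
        throw new UnsupportedOperationException("wire up your model client here");
    }
}
```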
This method works well with locally hosted models. I tested it with Qwen Coder in LM Studio. While functional, the results are not as good as those from OpenAI’s GPT models.
Pros:
- Lower resource requirements.
- Compatible with local models.
- Good conversion results.
Cons:
- Inability to add or delete files.
- Requires detailed, file-type-specific prompts.
In conclusion, even without passing the entire project context, this approach outperforms the contextual one. It is more cost-effective, delivers better results, and is easier to customize. However, as models advance, the contextual approach may eventually become the preferred method.
Approach 3: Hybrid Approach
The Hybrid Approach combines elements of both the contextual and incremental approaches. I envision it as an incremental process with tools to provide project context. Although I have not experimented with this approach yet, it should theoretically combine the benefits and challenges of the other methods. While it could offer more versatility, it would also be more resource-intensive, expensive, and require advanced models.
Challenges
Working with AI is both fascinating and challenging. Unlike traditional programming, where the outcome depends on your development skills, AI-driven projects also require the ability to communicate effectively with the model and define tasks clearly. Writing prompts that the model can understand is essential.
One key to success is crafting precise, direct, and non-overlapping instructions in your prompts. However, things can get tricky. For example, I encountered an issue where the converted Java file was returned inside a Markdown formatting block, even though I had explicitly instructed the model to return only the raw file. I solved this by specifying the exact formatting string that should not be present. This felt like extra overhead.
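A simple defensive complement is to strip any stray fence in the converter itself before saving the file; a minimal sketch:

````java
// Defensive cleanup for model output that arrives wrapped in a Markdown
// code fence despite instructions to return the raw file.
final class ModelOutput {

    static String stripMarkdownFence(String output) {
        String trimmed = output.trim();
        if (trimmed.startsWith("```")) {
            int firstNewline = trimmed.indexOf('\n');      // skip the ```java line
            int closingFence = trimmed.lastIndexOf("```");
            if (firstNewline >= 0 && closingFence > firstNewline) {
                return trimmed.substring(firstNewline + 1, closingFence).trim();
            }
        }
        return trimmed;
    }
}
````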
Models are not deterministic. They may return different results for the same prompt and file, although this does not happen often. In most cases (95%), the conversion is great, but in the remaining 5%, issues like the unwanted Markdown formatting appear. This can be minimized or fixed by refining your prompts, though it is often difficult to pinpoint exactly how. Experimentation is key.
Prompts are not universal. Different models require different approaches. More advanced models need fewer explanations, while less sophisticated ones require more detailed instructions. It’s like working with developers at different skill levels.
Advanced models are like senior developers and therefore expensive. While they produce better results with less effort, they come at a higher cost. For instance, one attempt to convert Spring Pets using the contextual approach with GPT-4o costs about 15 cents. While not prohibitive, the costs add up with multiple runs. I ran the converter over 100 times, and processing larger projects will be even more expensive. Moreover, there are limits. I frequently reached the tokens per minute (TPM) limit, requiring timeouts. Paying more to OpenAI can lift these limits, but if you want the best results, be prepared to pay more.
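One practical mitigation is to wrap each model call in a retry with exponential backoff instead of failing the whole conversion run; a minimal sketch (the error detection is simplified and would depend on your API client):

```java
import java.util.function.Supplier;

// Retries a model call with exponential backoff when the API rejects it,
// e.g., with an HTTP 429 after hitting the tokens-per-minute limit.
final class Retry {

    static String withBackoff(Supplier<String> call) throws InterruptedException {
        long delayMillis = 2_000;
        int maxAttempts = 5;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) { // simplification: treat any failure as rate limiting
                if (attempt == maxAttempts) {
                    throw e;
                }
                Thread.sleep(delayMillis);
                delayMillis *= 2; // 2s, 4s, 8s, 16s
            }
        }
        throw new IllegalStateException("unreachable");
    }
}
```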
Finally, legal considerations are also important. Always read the licensing agreements for the models you use to avoid potential legal issues. Some models may prohibit commercial use, and hosted models may process your data in ways that could violate intellectual property rights. Before using AI in commercial projects, always consult with legal experts.
Conclusion
AI is undoubtedly becoming a core part of our lives, transforming how we approach programming. The once-complex task of converting a project from one framework to another is now much easier with the help of AI, saving significant time and resources.
However, such converters are not limited to just Spring-to-Helidon conversions. Given the appropriate set of prompts, they can also assist in converting and migrating older javax.*-based Java EE projects to modern Jakarta EE.
I plan to continue improving the converters. As always, I welcome your feedback, whether in comments or on social media.
Here are the links to the GitHub projects mentioned in this article:
- Spring Pets – A test project used for conversion testing
- Contextual Converter
- Incremental Converter
Thank you!