
AI-Accelerated Innovation: The Next Evolution of Software Engineering


Many have answered the call to action in response to the debut of generative AI (GenAI) on the global stage. Since OpenAI brought ChatGPT to market in November 2022, there have been monumental developments, from the highs of passing the bar exam to the lows of citing fake cases in court. On top of that, more public and private LLMs have made their way into the GenAI arena, revealing many opportunities for enterprises.

As such, there’s no shortage of opinions around productivity metrics, with some claims as high as 50+% productivity improvements across knowledge workers’ job functions. Software engineering is certainly no exception. In fact, it’s often cited as one of the most impacted job groups.

Think back for a moment on how DevOps and CI/CD changed the way software teams deliver products. It was an enormous transformation, but one that — in theory — has become the norm in many scenarios. In a similar way, AI now has the potential to again transform how we create, run and support software.

To better understand GenAI’s potential impact, we’ve been running experiments in our GenAI X Hub, producing well over 4,000 business and technology use cases across the entire software development lifecycle (SDLC). Everyone and anyone in our network willing to experiment with GenAI — from our industry/domain experts to product managers and architects to developers to quality and systems engineers to customer teams — was part of this initiative.

In our research and experimentation, we’ve found the market’s productivity improvement research to be accurate, predominantly for individual daily tasks within the SDLC, such as helping formulate user stories, providing recommendations, helping with scaffolding and corrections, and applying patterns consistently.

So, we began to explore the next evolution of GenAI-enabled productivity across the SDLC, asking ourselves: How can GenAI drive value in the overall process from these optimized individual tasks?

The points we examine in this blog attest to the rapidly evolving and highly iterative nature of GenAI. To make it abundantly clear: we should continue experimenting with GenAI across the SDLC and continuously learn from its practical applications.

Let’s unpack the ways in which we can leverage GenAI for software engineering and development now and in the future…

Experimenting with GenAI Across the SDLC

It’s true that there’s meaningful proof that GenAI can be used by people across the SDLC to increase velocity and quality of artifact creation. One example is “coding” tasks (as opposed to true R&D work), such as code generation for a well-structured, well-defined user story. Another example is a service desk with standard operating procedures, where a request comes in and the agent can both process and complete this request while the next service request is in queue. This model resembles a FIFO queue with discrete and independent tasks, which means that 20% optimization in individual service desk tasks will result in close to 20% productivity improvement for the overall function.
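
To make the arithmetic behind that claim concrete, here is a minimal Python sketch of such a FIFO workload. The ticket count, task duration and 20% per-task speedup are illustrative assumptions, not measurements:

```python
from collections import deque

def total_processing_time(task_minutes, speedup=0.0):
    """Process a FIFO queue of discrete, independent tasks sequentially.

    Because no task depends on another, a uniform per-task speedup
    carries through almost one-to-one to overall throughput.
    """
    queue = deque(task_minutes)
    elapsed = 0.0
    while queue:
        task = queue.popleft()
        elapsed += task * (1.0 - speedup)  # e.g. 0.20 = 20% faster per task
    return elapsed

# Illustrative workload: 100 tickets of 30 minutes each.
baseline = total_processing_time([30] * 100)
optimized = total_processing_time([30] * 100, speedup=0.20)
print(f"Overall improvement: {1 - optimized / baseline:.0%}")  # ~20%
```

Because nothing in the queue depends on anything else, the per-task gain carries through almost one-to-one; much of the rest of this post is about why the SDLC rarely behaves this way.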

Keep in mind, however, that the SDLC is a highly non-deterministic process. Just think: How many software products or enterprise software platforms have you delivered where the process has been simplistic or deterministic? As you experiment with and implement GenAI for these quick-win use cases, you’ll need to concurrently train your toolset to better handle the sheer number of interactions and interdependencies — both between humans and systems — so that you can drive more holistic, long-term productivity.

To further expand on these complexities: for complex software implementations, there are interdependencies between tasks, team members, teams, extended groups of contributors and decision makers, as well as technical dependencies on other system components across the SDLC. And let’s not forget about end users … they also have an opinion or two from time to time.

And — while modern engineering practices elevate many system-level dependencies through decoupling and modular designs — the overall process is still heavily influenced by human factors, evolving requirements, unpredictable errors and complex business logic resulting in intra- and inter-dependencies.

On top of this, there’s the enormous challenge of measuring productivity and quality after you’ve successfully implemented GenAI, and of having enough productivity history to properly evaluate GenAI’s impact.

We’ve recently explored GenAI limitations in handling complex tasks and determined that multi-step, multi-nodal tasks will require an ecosystem of AI models to drive real, incremental value. To start, you’ll need to ensure that you have an optimal data framework, that you properly maintain and manage your data, and that you implement a hybrid architecture that leverages a single generalized model to orchestrate tasks.
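
As a rough illustration of what that hybrid architecture could look like, the sketch below has a generalized “orchestrator” decompose a task and route each subtask to a specialized model. The model registry, the static plan and the string-returning stand-ins are all hypothetical placeholders for real LLM calls, not a reference to any particular product or API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Subtask:
    kind: str      # e.g. "requirements", "code_generation", "test_design"
    payload: str

# Hypothetical registry of specialized models keyed by subtask type.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "requirements": lambda text: f"[requirements model] {text}",
    "code_generation": lambda text: f"[code model] {text}",
    "test_design": lambda text: f"[test model] {text}",
}

def orchestrate(task_description: str) -> List[str]:
    """A generalized model would decompose the task and pick specialists.

    Here the decomposition is a static plan to keep the sketch
    self-contained; in practice this step would itself be an LLM call.
    """
    plan = [
        Subtask("requirements", task_description),
        Subtask("code_generation", task_description),
        Subtask("test_design", task_description),
    ]
    return [SPECIALISTS[step.kind](step.payload) for step in plan]

if __name__ == "__main__":
    for result in orchestrate("Add multi-currency support to checkout"):
        print(result)
```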

Shifting Toward a Balanced AI-Enabled Value Stream

Once you overcome the challenges and blockers in the GenAI-enabled value stream and have a highly optimized SDLC in which GenAI truly improves software development productivity, you’ll need to shift beyond productivity as a sole indicator of GenAI impact on software development.

For example, if a product feature is wrongly defined, or is the wrong feature to build in the first place, an outsized focus on productivity just means you’ll develop the wrong software faster.

While GenAI-enabled solutions are currently valuable for answering simple, individual tasks, current software engineering processes infused with GenAI aren’t yet able to address the challenges of aligning customer and business problems with the proper solutions.

Functional GenAI capabilities that enable real business value — like next-generation conversational search — will require significant experimentation, a large number of cycles and likely multivariate testing in production to “make it right.” This is where proper engineering practices and increased productivity will greatly accelerate the velocity of such experimentation.

On its own, and with value stream optimization, GenAI can aid the “fail fast” model. But perhaps we could leverage GenAI to shift toward a “succeed fast” model.

For example, we’re currently implementing capabilities for code migration, starting with converting source code into target code, where GenAI is indeed useful. But is the resulting code really eliminating technical debt, or simply dressing technical debt in a new language?

Instead of converting code to code, we’ve been exploring the viability of converting source code to requirements and enabling real improvements in a target system. In doing so, business analysts, product managers and other relevant roles can understand the functionality of old and new features and start contributing feature improvements. From there, we can continuously generate target code based on evolving requirements where engineers productize and validate code and functionality.
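
A minimal sketch of that code-to-requirements-to-code loop might look like the following. The `extract_requirements` and `generate_target_code` functions are hypothetical stand-ins for GenAI passes (plus human review), and the legacy snippet is invented for illustration:

```python
def extract_requirements(source_code: str) -> list:
    """Hypothetical GenAI pass: lift behavior out of legacy code into
    plain-language requirements a BA or product manager can edit."""
    return [f"Requirement derived from: {line.strip()}"
            for line in source_code.splitlines() if line.strip()]

def generate_target_code(requirements: list) -> str:
    """Hypothetical GenAI pass: produce target-language code from the
    (possibly human-amended) requirements."""
    return "\n".join(f"# implements: {req}" for req in requirements)

legacy = """
calculate_interest(balance, rate)
apply_late_fee(account)
"""

requirements = extract_requirements(legacy)
# A human contributor adds an improvement before regeneration.
requirements.append("New: waive the late fee for first-time occurrences")
print(generate_target_code(requirements))
```

The important property is that the human-editable artifact sits in the middle: a product manager can add or amend a requirement, and the target code is regenerated from the updated list rather than patched by hand.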

Essentially, organizations should be focusing their efforts in two directions: doing things right (in a productive way) and doing the right things. The real challenge is translating the end-to-end user journey into a proper set of functional and technical implementations. By applying GenAI early in the SDLC, at the product definition stages, and continuously applying it throughout the process to validate early assumptions against actual user behavior, we can increase the functional fidelity of the resulting software.

Achieving Immediate Value & Scaling for Long-Term Success

Considering all the above, we ask ourselves: Can we leverage GenAI to create a set of approaches, capabilities and tools to qualitatively and quantitatively improve innovation?

In the long term, quite a bit is still unknown, but we are certain that the above building blocks will need to coalesce into a reimagined SDLC process from the one we are accustomed to today.

Will it require specialized models, or a mixture of experts? Are we comfortable with a short-memory chain of thought approach, or will we need to evolve to a tree of thought with broad signal aggregation and persistent, long-term memory?
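
To make the chain-of-thought versus tree-of-thought distinction concrete, here is a toy Python sketch of a beam-style search over candidate reasoning steps. The `expand` and `score` functions stand in for model calls and use trivial placeholder logic:

```python
import heapq

def expand(thought: str) -> list:
    """Stand-in for an LLM proposing candidate next reasoning steps."""
    return [f"{thought} -> option {i}" for i in range(3)]

def score(thought: str) -> float:
    """Stand-in for a model (or heuristic) rating a partial solution."""
    return -len(thought)  # toy heuristic: prefer shorter chains

def tree_of_thought(root: str, depth: int = 2, beam: int = 2) -> str:
    """Keep the `beam` best partial thoughts at each depth instead of a
    single linear chain, which is the essential difference from a
    chain-of-thought approach."""
    frontier = [root]
    for _ in range(depth):
        candidates = [child for t in frontier for child in expand(t)]
        frontier = heapq.nlargest(beam, candidates, key=score)
    return max(frontier, key=score)

print(tree_of_thought("Define the rollout plan"))
```

A chain-of-thought approach would effectively keep `beam=1` and never revisit alternatives; widening the beam and persisting the frontier between sessions is one way to picture the broad signal aggregation and long-term memory question above.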

In the immediate term, our focus is on optimizing the SDLC building blocks. For example:

  • Continuously assessing and comparing existing project plans against project plans generated with GenAI (a minimal comparison sketch follows this list).
  • Closing the loop between software product definition, strategy and requirements, and their translation into an evolving backlog.
  • Translating user stories into fully functional code by collaborating with architects, BAs, developers and GenAI, and then continuously retranslating changing requirements into code, enabling “custom” no-code.
  • Enabling GenAI DevOps with continuous automation on top of dynamic CI/CD frameworks.
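
For the first item in that list, a comparison between an existing plan and a GenAI-generated one can start very simply. The sketch below treats each plan as a map of tasks to estimates; the plans themselves are invented for illustration, and in practice the second one would come from a GenAI pass over the same product requirements:

```python
def compare_plans(existing: dict, generated: dict) -> dict:
    """Compare two project plans expressed as {task: estimated_days}.

    Surfaces tasks one plan has and the other lacks, plus estimate gaps,
    so the differences get reviewed rather than either plan being
    accepted wholesale.
    """
    return {
        "missing_from_existing": sorted(set(generated) - set(existing)),
        "missing_from_generated": sorted(set(existing) - set(generated)),
        "estimate_deltas": {
            task: generated[task] - existing[task]
            for task in set(existing) & set(generated)
            if generated[task] != existing[task]
        },
    }

# Invented plans; in practice the second would come from a GenAI pass
# over the same product requirements.
existing_plan = {"design API": 5, "build checkout": 8, "load testing": 3}
generated_plan = {"design API": 4, "build checkout": 8, "security review": 2}
print(compare_plans(existing_plan, generated_plan))
```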

Another key focus in the immediate term involves change management among individuals and teams at the task and tool levels, even before we get into mindset and process. Gradually adopting GenAI and ensuring that it is effective and consistently used is extremely important and must be managed and measured to achieve meaningful mid-term results. More holistically, we should be addressing the following:

  • Establishing security guidelines and rules of engagement with AI so your team feels empowered to explore and you don’t expose your company to additional risk.
  • Implementing quick-win use cases now where you can, such as for code generation, task automation and artifact/issue analysis.
  • Closely collaborating with your teams and encouraging individuals to openly share what’s working and not working.
  • Creating working groups focused on understanding where you could continuously leverage and evolve AI within your organization.

As we continue this journey, we need to remind ourselves about the promise GenAI has to offer and embrace each milestone. Envision GenAI’s role in the SDLC as eventually evolving into:

  • An accurate validator of inputs, assumptions and constraints going into the process.
  • The ultimate codifier of “engineering norms,” enabling consistent application of leading engineering practices through the entire product lifecycle.
  • A truly collaborative ‘extreme programming’ expert-expert pair.
  • A skills assessor and trainer.
  • An experience advisor for everyone involved in the process.

Time will tell. Human experts within the SDLC are not going anywhere. If anything, they will need to become evolved experts who can responsibly and securely harness GenAI.

Stay tuned for more insights around several of the topics that we touched on within this post. We hope you will join us on this journey.

This article was originally published on epam.com/insights. The article’s main photo is from envato.com.

The authors of this article are:

  • Eli Feldman. CTO, Advanced Technology, EPAM.
  • Sam Rehman. SVP, Chief Information Security Officer, EPAM.
  • Adam Auerbach. VP, Head of Cloud Agility & Testing, EPAM.

Editor-in-chief at Just Geek IT

For five years, he has been developing one of the largest Polish content portals covering the IT industry. He is the creator of the devdebat format, in which he brings together the opinions of several experts on a chosen topic. He has been working remotely for 10 years.
