
Digital Frontier 2023: Next-Gen Software Unveiled

Since the release of ChatGPT in November 2022, there have been countless conversations about the impact of similar large language models. Generative AI has forced organizations to rethink how they operate and what can and should be modified. Specifically, organizations are considering the impact of generative artificial intelligence on software development. While the potential of generative AI in software development is exciting, there are still risks and barriers to consider.

Members of VMware’s Tanzu Vanguard community, industry experts drawn from a range of companies, shared their views on how technologies such as generative AI are shaping software development and technology decisions. Their insights answer some questions, and raise new ones, for companies evaluating their AI investments.

AI will not replace developers

Generative AI has introduced a level of software development speed that did not exist before. It increases developer productivity and efficiency by shortening the time spent writing code. Solutions like the ChatGPT chatbot, along with tools like GitHub Copilot, can help developers focus on generating value instead of writing boilerplate code. By acting as a multiplier on developer productivity, it opens up new possibilities for what developers can do with the time saved. However, despite its intelligence and its benefits for pipeline automation, the technology is still far from replacing human developers.

Generative AI should not be treated as able to work independently; it still needs supervision, both to ensure the code is correct and to keep it secure. Developers still need to understand the context and meaning of the AI’s answers, because sometimes they are not quite right, says Thomas Rudrof, DevOps Engineer at DATEV eG. Rudrof believes artificial intelligence is better suited to simple, repetitive tasks, acting as an assistant rather than replacing the role of a developer.
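The need for supervision can be made concrete with a small sketch. The function names and the bug below are purely illustrative, not drawn from any real AI tool's output: the point is that a plausible-looking suggestion can hide an edge case a reviewing developer should catch with explicit checks.

```python
# Hypothetical sketch: an AI-suggested helper can look plausible while
# mishandling an edge case, so a reviewer adds checks before accepting it.

def average_suggested(numbers):
    """As an AI tool might plausibly suggest it: fails on an empty list."""
    return sum(numbers) / len(numbers)

def average_reviewed(numbers):
    """After human review: the empty-list case is made explicit."""
    if not numbers:
        raise ValueError("average of an empty sequence is undefined")
    return sum(numbers) / len(numbers)

# Checks a reviewing developer might run before merging the suggestion.
assert average_reviewed([2, 4]) == 3.0
try:
    average_suggested([])      # the hidden edge case
except ZeroDivisionError:
    pass                       # review caught what the suggestion missed
```

The same habit scales up: treating every generated snippet as untrusted input to the review process, rather than as finished work, keeps the productivity gain without inheriting the defects.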

Risks of AI in software development

Despite generative AI’s ability to make developers more efficient, it is not flawless. Finding and fixing bugs can be more challenging with AI, because developers still need to carefully review any code the AI produces. AI-generated software also carries its own risk: the output follows the logic it was given and the data set it was trained on, says Lukasz Piotrowski, developer at Atos Global Services. The technology will therefore only be as good as the quality of the data provided.

At the individual level, AI creates security challenges: attackers will seek to exploit the capabilities of AI tools, while security professionals use the same technology to defend against those attacks. Developers must be extremely careful to follow best practices and not embed credentials and tokens directly in their code. Anything sensitive, or containing intellectual property that could be revealed to other users, should not be uploaded. Even with safeguards in place, AI tools may still leak information. If adoption is not managed carefully, there can be huge risks when secrets or other confidential information are inadvertently fed into generative AI, says Jim Kohl, DevOps Consultant at GAIG.
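The advice about credentials has a standard, simple form. A minimal sketch, assuming a token held in an environment variable named `API_TOKEN` (the name is illustrative): secrets are read at runtime instead of being hard-coded in source files that might later be pasted into an AI tool.

```python
# Minimal sketch: keep secrets out of source code by reading them from
# the environment at runtime. The variable name API_TOKEN is illustrative.
import os

def get_api_token():
    token = os.environ.get("API_TOKEN")  # set by the deployment environment
    if not token:
        raise RuntimeError("API_TOKEN is not set; refusing to start")
    return token
```

Because the secret never appears in the repository, sharing a file with a chatbot or a code-completion tool cannot expose it.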

Best practices and education

Currently, there are no established best practices for using artificial intelligence in software development. The use of AI-generated code is still in an experimental phase for many organizations due to numerous uncertainties such as its impact on security, data privacy, copyright and more.

However, organizations that are already using AI need to use it wisely and should not trust the technology blindly. Juergen Sussner, Lead Cloud Platform Engineer at DATEV eG, advises organizations to implement small use cases and test them thoroughly: if they work, scale them; if not, try another use case. Through small experiments, organizations can determine the risks and limitations of the technology for themselves.

Guardrails are essential when it comes to the use of artificial intelligence, and they help individuals use the technology effectively and safely. Ungoverned use of AI in your organization can lead to security, ethical, and legal issues. Some companies have already faced severe penalties over AI tools used for research and coding, so there is a need to act quickly. For example, there have been lawsuits against companies for training AI tools on data sets of thousands of unlicensed works.

Getting AI to understand context is one of the bigger challenges of using it in software development, says Scot Kreienkamp, Senior Systems Engineer at La-Z-Boy. Engineers need to understand how to formulate problems for the AI, and education programs and training courses can help teach that skill set. Organizations serious about AI technologies should upskill relevant personnel in prompt engineering.

As organizations grapple with the implications of generative artificial intelligence, a paradigm shift is occurring in software development. AI will change the way developers work. At the very least, developers using this technology will be more efficient at writing code and building the foundations of a software platform. However, AI still needs a human operator and should not be trusted to work unsupervised. The insights shared by VMware’s Tanzu Vanguards underscore the need for prudent integration, and for guardrails that mitigate software development risks.
