Premium Member
The FT Tech for Growth Forum is supported by HCLTech, our premium member, which helps to fund the reports.
HCLTech shares its business perspective on the forum advisory board. It discusses topics that the forum should cover but the final decision rests with the editorial director. The reports are written by a Financial Times journalist and are editorially independent.
In the piece below, HCLTech gives its view on generative AI. Its views are its own, separate from those of the FT and the FT Tech for Growth Forum.
Generative AI: a transformative force with a bumpy road ahead
Ashish K Gupta, chief growth officer, Europe and Africa, HCLTech
The hype surrounding generative AI is at a peak. Businesses that look beyond the hype see it as a revolutionary technology with the potential to create great value, but they also recognise that, if not managed well, it can cause harm.
That said, companies that embed the technology intelligently into their organisational processes, that have the courage to reinvent their markets and the grit to stay the course, will emerge stronger, more productive and unbeatable in their fields.
Generative AI adoption
In the business-to-consumer realm, the level of adoption of generative AI is already impressive. The fact that AI can communicate with users in their native languages makes it vastly accessible. It is no exaggeration to say that this technology has the power to democratise access to knowledge and services. Consumer-facing services have already experienced the effect of generative AI in areas including education and the creation of music and content.
A good way to think about the rate of adoption of generative AI in B2C is to consider the time taken for the internet to permeate society. Even after 25 years, ecommerce penetration is only at 30 per cent to 40 per cent. The adoption of generative AI is likely to be much faster: in some digital services markets such as music, it could explode. Deep and wide penetration will take time, however. We are at an early stage and many lessons will be learnt.
In business-to-business settings, the potential benefits of generative AI are significant, but challenges have to be addressed before the technology is deployed at scale. Additionally, there are social, commercial, legal and ethical questions that will need careful consideration.
Regulation to make businesses accountable for their use of generative AI will be put in place more quickly than has been the case for other technologies. Most companies will need to adopt practices that are already the norm in financial services and the pharmaceutical sector. In relation to their use of generative AI, businesses should adopt the mantra of “do no harm”.
We can see significant interest in generative AI from all our enterprise customers. I have been surprised by the range of uses already in place. Adoption is well advanced in areas including software engineering, content generation, advertising and research and development. This is especially the case in healthcare, where generative AI has transformed the understanding of chemistry and biology. It is evident that realising the potential of AI is a prize worth chasing.
I believe the real benefits of generative AI, and their effect on businesses, will become more apparent in the medium term. This technology will change how work is done. It can also help you reimagine your enterprise: where you play, for instance, and how you win.
The danger will come if AI is plonked on top of old processes, old systems and biased data. Not understanding generative AI before making use of it will be dangerous. Strong principles to govern the use of AI will be essential. We have to ground this technology to reduce the possibility of harm. Of course, costs and lock-in will still be issues as the technology moves into the mainstream.
Human-machine interaction
Organisations will be liable for the outputs of generative AI just as they are for the work delivered by their employees. Navigating the risks of human-machine interaction will be important for all organisations.
At HCLTech we believe that machines should be used to amplify human impact, and hence boundaries are needed. This is especially true for generative AI: humans should make the final decisions on AI-suggested outputs.
Anyone who has travelled in a car with self-driving capabilities will have experienced amazement, exhilaration and terror. You learn to trust the car to brake in time, navigate a roundabout or handle a winding, hilly road. Driving in such a car convinced me that humans who interact with generative AI systems will need robust training. They will need a thorough understanding of how the technology is used and where it can fail.
Careful management of human-machine interaction will ensure that generative AI is used safely and responsibly but the journey is likely to be long and bumpy.
In the near future the skills required to use and develop technology more generally are likely to change. The demand for quality control and human oversight of AI-generated content will grow significantly. Closing the skills gap will be essential. We should pay particular attention to these areas:
Bias and fairness: we have to be aware of the biases inherent in generative AI and take steps to mitigate them.
Explainability: the way generative AI works can be confusing to the uninitiated. We need people who fully understand the models to explain their outputs. This will help everyone place more trust in AI.
Safety: AI models can generate harmful content and hallucinate. We have to be able to identify and manage these failures.
Only when organisations achieve an end-to-end, closed-loop system, with humans and machines optimised to work with each other, will we begin to see dramatic benefits.