Boards of companies often face the imperative to embrace technological advancements to stay relevant in a rapidly evolving business environment. While some innovations, such as cloud computing and digital transformation, are revolutionary, others like blockchain have struggled to gain traction.
Certain developments, like robotic process automation or virtual and augmented reality (VR/AR), may initially impact only specific industries. Additionally, emerging technologies such as the Internet of Things (IoT), 5G, and quantum computing have the potential to reshape the competitive landscape in the future, suggesting that boards keep an eye open for developments in this space.
Furthermore, developments in cybersecurity and privacy laws demand the board’s attention to mitigate risks and ensure compliance. Currently, the spotlight is on artificial intelligence (AI), which presents both opportunities and challenges that boards must address to navigate the shifting technological landscape effectively.
AI stands out as a transformative force, capable of reshaping entire industries and business models. As boards of companies grapple with the imperative to embrace AI, it’s evident that their oversight must evolve to keep pace with this explosive growth.
While true artificial intelligence — a machine's ability to perform cognitive tasks akin to human intelligence — remains a distant goal, contemporary AI encompasses machine learning-powered technologies like ChatGPT, computer vision, and self-correcting algorithms. In this article, “AI” is used as shorthand for current technologies that enable machines to execute tasks traditionally carried out by humans.
The journey towards effective oversight begins with education. Board members must cultivate a foundational understanding of AI concepts, capabilities, and limitations. This can be achieved through various means, including engaging external expertise or leveraging internal resources within the management team. Equally important is staying informed about the legal and regulatory landscape surrounding AI. India currently does not have codified laws or regulations that specifically govern the use of AI, but is looking to balance innovation and risk. There are a few discussion papers that outline the context, including at least three approach papers published by the NITI Aayog, and one by seven expert groups from the Ministry of Electronics and Information Technology (MeitY). A few others, including the EU regulations, will also help boards navigate the evolving standards.
However, familiarity alone is insufficient. Boards must translate their understanding of AI into actionable strategies within their organisations. This involves fostering collaboration among cross-functional teams comprising business, legal, and technology experts. These teams play a crucial role in developing a preliminary framework for AI implementation, identifying areas where AI can enhance business processes and improve efficiency, while also assessing associated risks and monitoring industry trends.
Building a robust AI infrastructure requires more than just technological prowess; it demands a foundation of high-quality data. David Edelman and Vivek Sharma highlight this in their Harvard Business Review article. They write, “All AI models are based on data. Your proprietary first-party data, especially about customers and their behaviours, is pure gold.” They add that “the more you interact with and capture information about your customers, likely by integrating data about them from across the enterprise, and creating new data as you test innovations, the richer your AI models will be.”
The journey towards AI integration is iterative, and progress must be monitored closely. Quarterly reviews conducted by the board serve as checkpoints, allowing for the assessment of AI initiatives' efficacy and alignment with strategic objectives. These reviews provide opportunities for course correction, ensuring that AI remains a value-adding proposition for the organisation.
Despite these efforts, challenges persist. One notable concern is the widening gap between AI’s rapid advancement and boards’ capacity to oversee it effectively. In an earlier column for this newspaper, I argued that AI may not replace corporate boards; even so, its transformative potential necessitates proactive engagement from board members.
A 2023 McKinsey global survey highlighted the year as AI’s breakout year, noting that less than a year after many of these tools debuted, one-third of respondents reported regular use of generative AI tools in at least one business function. A more recent FT article puts this figure at 75 per cent. The survey also noted that AI is already on boards’ agendas at over a quarter of companies. Further, 40 per cent of respondents said their organisations will increase their investment in AI overall.
This trend underscores the need for boards to recognise AI’s potential to disrupt traditional business models and industries, implying they need to proactively adapt to AI-driven transformations within their respective sectors.
In navigating this landscape, boards must not only focus on the technological aspects of AI but also consider its ethical implications. As AI becomes increasingly integrated into business operations, questions surrounding fairness, accountability, and transparency loom large. Boards have a responsibility to ensure that AI initiatives adhere to ethical standards and uphold the organisation’s values.
AI represents a watershed moment for corporations and demands a recalibration of boards’ oversight functions. By prioritising education, collaboration, regulatory compliance, data management, and continuous evaluation, boards can navigate the complexities of AI integration effectively. Moreover, by embracing AI comprehensively and ethically, boards can position their organisations for sustained success in an increasingly AI-driven world.
The writer is with Institutional Investor Advisory Services India Limited. The views are personal. X: @AmitTandon_in