AI Impact Summits have been critical inflexion points in global AI governance, marking a shift from declaratory, aspirational frameworks to an emphasis on implementation and impact. India will be the first country in the Global South to host the summit in February 2026. The Special Feature highlights India’s culturally grounded, inclusion-focused framing of the upcoming Summit and advocates for accountable voluntary mechanisms, principle-based standardisation, equitable stakeholder participation, and domestically viable outcomes aligned with India’s strategic interests.
Over the past several years, Artificial Intelligence (AI) has shifted from a purely technical topic to one with implications for governance, national security and business. This has led governments, international agencies, industry groups and civil society to initiate debates, consultations, expert meetings and regulatory proposals. Global AI Safety Summits have been held since 2023. The first summit was held at Bletchley Park, United Kingdom (2023), followed by Seoul, South Korea (2024), and Paris, France (2025). The fourth summit will be held in New Delhi, India, in February 2026. This special feature analyses developments from previous summits, outlines India’s proposals for the forthcoming AI summit, and offers ideas to ensure that the debate at the February 2026 meeting is results-oriented.
This special feature has four major parts. The first part highlights global efforts to date to build consensus on AI, while the second presents key developments from the last three AI summits. The third part discusses the upcoming 2026 AI Impact Summit. The fourth part presents concerns and recommendations for India’s policy stance at the AI Impact Summit, followed by a conclusion.
The early discussions in AI governance were spurred by an ‘ethics boom’ in 2016–2018. At the time, private corporations, academic institutions and NGOs released several documents on ethical AI, with 61 released in 2018 alone. This boom was likely not coincidental, but driven by several global incidents such as reports on racial profiling, the first human fatality by a self-driving car, and the Cambridge Analytica scandal, highlighting the perils of personal data being used without consent by algorithm-led targeted political advertising.[1] There was also a growing awareness of AI’s dual-use nature, necessitating political intervention.
While most of these documents indicated growing awareness of AI-related risks, such as bias, the ‘black box’ problem and potential labour displacement, the principles themselves lacked enforcement mechanisms. The corporate and NGO principles were also highly fragmented, depriving them of any consensus or legitimacy. However, state interest in ethical AI led to a push for a ‘soft-law’ approach to AI in some cases, while also translating into engagement on deliberations at the international level.
These discussions bore initial fruit in 2019. In May 2019, the United Nations Educational, Scientific and Cultural Organisation (UNESCO) issued the Beijing Consensus on AI and Education, urging the deployment of AI to enhance human capabilities and protect human rights. More importantly, just a couple of days after the Beijing Consensus, the Organisation for Economic Co-operation and Development (OECD) became the first intergovernmental body to secure an AI-specific agreement. The Recommendation on Artificial Intelligence (the ‘OECD AI Principles’) of 2019 became a pivotal consensus statement on AI standards from the world’s most advanced economies.[2] The OECD also launched the OECD.AI Policy Observatory to track national progress in AI policies, and used the metrics thus gained to produce the ‘State of Implementation of the OECD AI Principles: Insights from National AI Policies’ document in 2021. While these principles have found significant traction in some countries, most of Africa and the Asia-Pacific do not adhere to them.
The UNESCO and OECD documents, however, did find their way into other global fora. For instance, in 2019, the G20 Osaka Summit formally endorsed AI principles developed along ‘human-centric’ lines, similar to those of the OECD, and has continued to reaffirm them since.[3] The G7 also weaved AI into its discussions through the 2023 Hiroshima AI Process, which led to the creation of two distinct frameworks: the ‘Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems’ and the ‘Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems’.[4]
At the onset of the COVID-19 pandemic, AI growth skyrocketed, and so did concerns about its risks. These events set the stage for UNESCO’s Ad Hoc Expert Group (AHEG) on AI, whose draft recommendation placed strong emphasis on human rights and non-discrimination. Notably, this document also included a dedicated chapter on gender, highlighting the under-representation of women in the AI workforce. The document also called for preserving culture, traditional knowledge systems and languages within AI systems.[5] The content of the UNESCO recommendations revealed a highly aspirational outlook. Yet, they did not see significant implementation due to a lack of enforcement incentives or mechanisms, limited legitimacy (since the US was not a signatory to the framework), and limited inclusion of the Global South.
The Global Partnership on AI (GPAI) has been another key driver of geopolitical discussions on AI. Conceived at the G7 in 2018 and launched in 2020, the forum was designed as a project-oriented coalition to translate theory and ideas into policy action. The GPAI drew membership not only from G7 participants but also from other countries, including Australia, India and Mexico. The broader membership reflected two significant trends: the emergence of an AI framework among like-minded democracies, and a willingness to expand its scope beyond the Global West. While GPAI remains an essential framework for discussing AI, it has excluded one of the most critical players in AI, China, thereby reducing the reach of the norms it develops. Additionally, it faces issues similar to those associated with UNESCO norms: there are no accountability mechanisms for project execution.
AI has had a relatively late entry into formal UN resolutions. The UN adopted its first resolution on AI in 2024 (A/78/L.49), which called for “safe, secure and trustworthy” AI, co-sponsored by 125 members and adopted by consensus.[6] While the resolution had no immediate binding effect, it was expected to serve as a blueprint for developing regulations at the national and international levels. Later, a companion UN resolution, A/RES/78/311, was adopted, focusing on international cooperation in capacity-building, which was particularly relevant to the Global South. The value of this resolution is evidenced by its co-sponsorship by over 140 countries.[7] The same year also saw the adoption of the Global Digital Compact, which outlined commitments to ensure that UN norms and existing agreements guide the development and use of new technologies.[8] In 2025, UN members adopted another resolution to establish the UN Independent International Scientific Panel on Artificial Intelligence and the first UN Global Dialogue on AI Governance, which will begin operating in 2026.[9]
Beyond international deliberations, AI norms have been under consideration at the regional level. The EU AI Act, for example, focuses on risk mitigation through a classification system that categorises AI by its impact on fundamental rights and privacy.[10] In the US, the now-defunct Executive Order 14110, ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’, was the first comprehensive federal attempt to regulate AI. The 2023 order came only a few days before the Bletchley AI Safety Summit. It covered a broad range of AI risks, such as the production of Weapons of Mass Destruction, risks to cybersecurity, the generation of deepfakes, and privacy violations.[11] However, in January 2025, President Donald Trump rescinded the order, deeming it an obstacle to US leadership in AI.[12] The Trump administration has similarly rescinded or altered several other Biden administration policies governing AI and related fields, including the US export control framework for Responsible AI Diffusion (2025, rescinded) and the CHIPS and Science Act (2022).[13]
As of now, US AI governance norm-setting is not a whole-of-government exercise but one led primarily at the state level, with 38 states adopting more than 100 AI regulations in 2025.[14] California, for instance, a hub for AI and related tech companies including Meta, OpenAI, Alphabet Inc. (Google), NVIDIA and Anthropic, passed 18 AI-related regulations and amendments in 2024 alone, focusing on risk mitigation, accountability, trustworthiness, digital privacy, and safety. However, in December 2025, President Trump signed an executive order blocking states from creating their own AI laws, while putting existing norms under review to weed out patchwork rules and ‘onerous laws’ that may hinder US competitiveness in the AI market.[15]
China has been working on norm-setting since 2017, when the State Council issued its New Generation Artificial Intelligence Development Plan, with a focus on AI adoption at the departmental level. Additionally, the Beijing Academy of Artificial Intelligence released the Beijing AI Principles in May 2019, which called for research, development, use, governance and long-term planning of AI, to ensure the technology’s healthy development “to support the construction of a community of common destiny, and the realisation of beneficial AI for mankind and nature”.[16]
Between 2020 and 2022, China adopted voluntary industrial standards that served as early indicators of regulatory oversight. These included the Guidelines for the Construction of the National New-Generation AI Standard System 2020, which not only coordinated industrial regulations, applications, and key AI technologies, but also addressed ethical and secure development. The legal regulation of AI in China started after 2022, with three notable laws: the ‘Internet Information Service Algorithmic Recommendation Management Provisions 2021’ (effective 2022), the ‘Internet Information Service Deep Synthesis Management Provisions 2022’, and the ‘Measures for the Management of Generative Artificial Intelligence Services 2023’. These laws focused on content control, consumer protection, worker rights, IPR protection and cybersecurity, with protections extended to gig-economy workers. Additionally, the 2025 AI+ policy enabled the creation of ‘pilot platforms’ across various sectors such as manufacturing, medicine, transportation, finance and energy, to promote practical cross-sector AI integration.
Norms have also emerged in the Global South. India launched the National Strategy for Artificial Intelligence in 2018, with a focus on #AIForAll, outlining healthcare, agriculture, education, smart cities, infrastructure, and innovative mobility and transportation as key areas of intervention. It also underscored the need for ethical, responsible AI designed with privacy and security in mind, and for technological leadership to drive inclusive growth.[18]
Similarly, the Smart Africa Alliance (a coalition of 32 African Heads of State) released the ‘Artificial Intelligence for Africa Blueprint’ in 2021, which focused on developing AI infrastructure and human capital, accelerating tech adoption (‘lab to market’), extending the knowledge ecosystem and networking, and strengthening governance and regulatory approaches around data privacy and IPR, to make Africa a global player in AI.[19] The MERCOSUR countries have similarly approved a ‘Declaration of Principles on Artificial Intelligence’ that provides a framework for protecting socio-cultural aspects and workers’ rights, affirms the right to universal internet access, calls for explainability, human control and transparency in AI systems, and promotes AI education and skilling.[20] ASEAN has also launched a ‘Consolidated Strategy on the Fourth Industrial Revolution for ASEAN’, with an economic and developmental focus, emphasising e-government and data security, digital economy resilience and social transformation through skilling and people inclusion.[21]
Across these initiatives, several trends emerge. First, most of these initiatives have been framed as soft law, in anticipation of AI’s rapid growth. This, however, means that many of these commitments lack enforceability or accountability. While there have been attempts to harmonise norms through international and national standardisation, the efforts remain largely fragmented. Second, each of these approaches reflects different priorities for different actors. UNESCO’s norms have a broader socio-cultural focus on areas such as education and gender, while the OECD and GPAI were more focused on ensuring participation and promoting like-minded ‘democratic values’.
Among countries, the US places greater emphasis on innovation and competition in AI, while China focuses on state-led coordination to consolidate and promote social good. The EU places greater emphasis on ethical values, while ASEAN prioritises digital economic growth. Competing priorities have been a significant impediment to a coordinated, global norm-building effort. However, we are seeing some degree of consensus emerging through UN processes, indicating the possibility of more nuanced discussions.
Finally, even amid competing priorities, the need for AI that is both safe and beneficial has emerged as a common thread. Human rights, privacy and social welfare have emerged as powerful common denominators across countries and regions, though their scope and extent may vary. The question that remains is how to mobilise AI while building these concerns into the very design of AI systems. In this context, the AI summit process marks a step forward.
Bletchley Park, United Kingdom (UK), hosted the first global summit on Artificial Intelligence (AI) safety on 1–2 November 2023. The Bletchley Declaration was signed by 28 countries, including the EU (with New Zealand joining on 23 October 2024). The UK initiative, the first of its kind globally, was driven by two principal concerns: misuse risks and loss-of-control risks. A stark illustration of misuse would be an AI system facilitating the development of chemical or biological weapons.
AI companies agreed at the summit to provide government agencies early access to their models for safety evaluations. There was support for much-needed international coordination on AI safety, with colleagues from around the world presenting the latest evidence on this critical issue. The organisers also announced that the UK’s Frontier AI Taskforce would evolve into a permanent body tasked with conducting safety evaluations (the UK AI Safety Institute).
Although the summit helped narrow the gap between researchers focused on near-term and longer-term risks, it also revealed a different divide among researchers: the split between open-source and closed-source philosophies in AI development. Supporters of tighter controls argued that the risks posed by advanced AI are significant enough to justify restricting the release of powerful model code. The open-source community refuted this idea, arguing that concentrating AI development within for-profit companies could be equally harmful, and insisted that open models permit broader scrutiny and faster progress on safety.[22]
India used the AI Safety Summit 2023 to highlight its leadership in global AI governance, emphasising the need for safety, trust and accountability in the development and use of AI. It underscored the importance of a shared understanding of frontier AI risks, the transformative potential of AI for public services, and the value of wider international collaboration. On the sidelines of the summit, India held a series of bilateral discussions with several countries, focusing on strengthening cooperation in areas including AI safety, emerging and critical technologies, semiconductor ecosystems, cybersecurity and digital governance.[23]
AI Summit, Seoul, South Korea (2024)
The second summit was held in South Korea on 21–22 May 2024 in a hybrid format, with many agencies participating virtually. The event was co-chaired by South Korean President Yoon Suk Yeol and the United Kingdom (UK) Prime Minister Rishi Sunak following the UK’s launch of the series in 2023. Overall, the event achieved its broad objectives and helped sustain momentum from the 2023 UK AI Safety Summit. A fair criticism of the Seoul summit is that extending the agenda beyond AI safety diluted the distinctive focus that distinguished the 2023 UK AI Safety Summit. There is a view that, by incorporating themes such as innovation, inclusion, and industrial policy, the event shifted attention away from its primary focus on risk governance.[24]
During the summit, 16 leading tech companies made new voluntary commitments to promote the responsible development of advanced AI systems, signing up to the ‘Frontier AI Safety Commitments’, a set of principles and practices for responsible frontier AI development. In addition, 10 countries and the EU agreed to launch an international network of AI safety institutes: Australia, Canada, France, Germany, Italy, Japan, the Republic of Korea (South Korea), Singapore, the United Kingdom, the United States of America, and the European Union (EU).
Another success of the summit was the agreement of 27 states to collaborate on developing proposals for assessing AI risks. Through the Seoul Ministerial Statement, these countries committed to establishing shared risk thresholds for cutting-edge AI development and deployment, including criteria for determining when model capabilities might constitute ‘severe risks’ in the absence of adequate safeguards.
The UK AI Safety Institute (AISI), in partnership with the Alan Turing Institute, UKRI, and others, announced £8.5 million in research funding for ‘systemic AI safety’.[25] In general, the Seoul summit broadened the AI governance agenda by linking safety, innovation and inclusivity. It called for shared risk thresholds and mechanisms to determine when model capabilities become unsafe in the absence of proper safeguards. At the same time, the addition of innovation and inclusivity raised concerns about diluting the sharp focus on safety. Nevertheless, the ROK’s emphasis on areas such as low-power AI chips indicates that innovation can complement safety when aligned with clear priorities.
Industry commitments in Seoul reflect the buy-in of major industry players. However, the commitments to publish frameworks that identify intolerable risks and to halt development are voluntary; non-compliance therefore carries no penalty. As state capacity grows, governments will be increasingly able to test frontier models independently and reduce reliance on industry-provided assurances. In sum, the summit provided a platform for some critical declarations. More importantly, the various deliberations indirectly identified the need for a system to deliver actionable outcomes, including clearer national supervision systems, shared international standards, and coordinated strategies to balance safety with innovation.[26]
From 10 to 11 February 2025, the Artificial Intelligence (AI) Action Summit was held in Paris, France. French President Emmanuel Macron and Indian Prime Minister Narendra Modi co-chaired the summit. Representatives from more than 100 countries attended the summit. France broadened the agenda beyond AI’s potential risks, inequality, and job displacement. The focus shifted to the critical economic opportunities the technology may create. France successfully steered the AI summit away from the safety-focused agenda that had dominated the previous summits in Bletchley and Seoul. On the other hand, the summit also signalled a deliberate reduction in ambition. The joint declaration at the end of the summit was referred to as a ‘statement’ rather than a ‘declaration’. The Paris Statement provided only weak recognition of the concept of ‘AI safety’.
Multilateral and multistakeholder models for AI governance were also debated. On the multilateral side, the summit reinforced the role of the United Nations (UN) by acknowledging the Global Digital Compact (GDC) and major UN meetings and referencing the World Summit on the Information Society (WSIS). The need for a multistakeholder approach was also highlighted. The summit saw announcements of a US$ 300 billion investment in AI. It also raised pointed concerns about AI monopolies, consumer protection, and the misuse of intellectual property in developing AI models.[27]
Attending countries were invited to sign a ‘Pledge for a Trustworthy AI in the World of Work’, a non-binding declaration. Sixty countries signed the declaration, including Canada, China, France and India. However, the US and UK did not sign the final statement. There was no official clarification from the US regarding its decision not to sign. However, based on US Vice President J.D. Vance’s remarks at the summit, it can be inferred that the US is keen to adopt a more industry- and innovation-friendly approach and is concerned that excessive regulation could stifle innovation in the AI sector. The UK, for its part, held that the declaration failed to provide sufficient practical clarity on global governance and did not address more complex questions regarding national security and the challenges posed by AI. It is also possible that the UK chose not to antagonise the US, given the broader strategic interests of their bilateral relationship.[28]
As a co-chair, Prime Minister Narendra Modi highlighted that:[29]
Building on momentum from Paris, the governance dialogue on AI will continue at the upcoming AI Impact Summit in New Delhi on 19–20 February 2026. In a way, this is a continuation of India’s leadership in the AI dialogue, since the country also co-chaired the AI Action Summit in Paris. The Summit marks a departure from the earlier focus on frameworks, declarations, and commitments toward translating them into action.[30] Notably, the AI Impact Summit will be the first in the series of global AI summits to be held in a Global South country, underscoring the importance of the Global South’s participation in the international AI dialogue.
The language framing the AI Impact Summit signals this tonal shift. While it recognises previous consensus, it grounds its agenda within a framework that invokes Sutras and Chakras, making it culturally rooted and globally resonant. The use of these specific terms is not merely rhetorical; it signals an intent to move the global AI debate from technological abstraction toward a language of coherence, alignment, and transformation.
Sutra, in Indian traditions, refers to a concise aphorism that distils and presents profound wisdom, serving as a spiritual guide or a manual for life. The Sutras serve as a precise framework for AI development, shaping a vision of technology serving ‘humanity and sustainability’, as the OECD describes.[31] They do not just describe the shape of AI development now, but prescribe what it must look like in the future, and act as the ethical thread and ‘guiding principles’ that bring together disparate actions into a singular, holistic vision. The three Sutras defined are:
The three Sutras are to be operationalised through seven interconnected domains for the focused multilateral collaborations called Chakras. The Chakras are derived from the body’s seven energy centres, as conceptualised in ancient Indian philosophical traditions. They govern different dimensions of life and must be harmonised for holistic well-being. These Chakras focus on the areas needed to shape AI into a global public good and deliver tangible outcomes. Each of the seven Chakras is also associated with a working group that aims to explore ‘insights and policy ideas on key AI themes’ and, through multistakeholder participation, to identify challenges, best practices and actionable outcomes.
The working group on Human Capital is led by the Chairman of the All India Council for Technical Education (AICTE), Government of India, with country co-chairs from the Philippines and Rwanda.[38] In the run-up to the AI Impact Summit, pre-summit events such as the US–India AI and Technology Cooperation Dialogue, CXO Roundtables at Nasscom Technology Confluence 2025, and Mozilla Festival (MozFest) 2025 have also highlighted the importance of human capital as a pillar of inclusive AI. The AI Impact Summit has also launched flagship events, including India AI Tinkerpreneur (a partnership among the Atal Innovation Mission, NITI Aayog, and Intel India) and YUVAi, to promote AI education and skills for youth.[39]
India has also expanded its AI training infrastructure: the IndiaAI mission’s Future Skills vertical currently supports 500 doctoral scholars, 5,000 master’s students and 8,000 bachelor’s students. Already, as of July 2025, 200 scholars have been awarded fellowships.[40] AI education has also expanded beyond technical streams, with the IndiaAI Fellowship Program and Portal now supporting 13,500 scholars.[41] By October 2025, 27 laboratories had been identified by the National Institute of Electronics and Information Technology (NIELIT).[42] These initiatives explicitly target the summit’s Human Capital chakra by equipping youth and workers with AI skills.
The Working Group on Inclusion for Social Empowerment is chaired by the Secretary of the Department for Empowerment of Persons with Disabilities (DEPwD), Ministry of Social Justice and Empowerment, Government of India, with country co-chairs from Switzerland and Nigeria.[45] Flagship events such as AI by HER: The Impact Challenge for Women (encouraging women to demonstrate AI solutions while promoting gender equity) and pre-Summit events such as the ‘AI for Inclusion in India’ panel discussion organised by the Centre of Policy Research and Governance in New Delhi highlight the relevance of this pillar. Additionally, the upcoming regional AI summits in Meghalaya, Gujarat, Odisha, Madhya Pradesh, Uttar Pradesh, Rajasthan, Kerala and Telangana, organised as part of the broader AI Impact Summit, will also consolidate views on AI engineered for inclusivity.
India has also recognised the need for AI that is inclusive of all cultural, regional, linguistic, gender, and disability-based considerations. This is particularly relevant for the country, given the scale of its multilingualism: 22 official languages and more than 19,500 languages and dialects spoken overall.[46] India has launched initiatives such as Bhashini to develop LLMs capable of navigating 20 Indian languages, and BharatGen AI, a multimodal LLM that provides services in 22 Indian languages.[47] The Supreme Court of India is also using SUVAS, an AI tool that translates judicial documents into regional languages. In Telangana, the government has collaborated with the Gender x Digital hub and The/Nudge Institute, with support from the Gates Foundation, to launch the Sanmati AI initiative to enable women from low-income and rural communities to gain livelihoods by participating in the AI value chain.
The AI Impact Summit has emphasised ‘empowering all nations to participate meaningfully in AI oversight while supporting continued innovation and technological advancement’.[48] The working group tasked with achieving this goal is led by the Head of the Department of Data Science and AI at IIT Madras, with co-chairs from Brazil and Japan.[49] The Centre for Responsible AI (CeRAI), IIT Madras, has organised a Conclave on Safe and Trusted AI as part of the AI Impact Summit on 11 December 2025. Pre-summit events such as e-Raksha by CyberPeace, the US–India AI and Technology Cooperation Dialogue, and the “Operationalising AI Safety” panel discussion by the Centre for Communication Governance have also highlighted the importance of this Chakra.
In the Indian context, it is expected that 13 projects supported by IndiaAI on the theme of safe and trustworthy AI will be presented during the summit. These would include IIT Jodhpur’s contributions to the sub-field of ‘machine unlearning’, which has become crucial for enabling a machine learning system to remove the influence of incorrect, corrupted, or harmful training data without retraining the entire model.[50] In terms of governance, India has launched its AI Governance Guidelines, 2025, which explicitly mention the summit principles of ‘Safe and Trusted’ AI. India has also proposed a draft amendment to the IT rules that would mandate the labelling of AI-generated content on social media.[51]
The working group, headed by the Secretary, Department of Science and Technology (DST), Government of India, and co-chaired by Canada and Singapore, has set its objectives to embed openness, reproducibility and safety as guiding principles for AI-based and AI-enabled research, foster cross-border and interdisciplinary collaboration, and identify cooperative models to bridge gaps in AI assets.[58] ‘AI as a Force Multiplier for Indian R&D and Innovation’ has been a significant theme for a pre-summit panel that examined the use of indigenous data, collaborative research models, and strategies to bridge lab-to-market gaps, yielding actionable recommendations to strengthen AI-driven science in India.
The summit is also organising a Research Symposium to facilitate the exchange of multidisciplinary frontier research on AI, thereby supporting inclusive, policy-relevant research. AI Expo is another flagship event, showcasing real-world AI applications, breakthrough technologies and innovations from across the globe that go beyond labs and academia to deliver actionable solutions. In India, start-ups such as Sarvam AI, Soket AI, Gnani AI and Gan AI are pursuing new projects to develop indigenous foundational models. India has also launched its AIKosh data platform, which provides over 1,000 datasets and 208 models for public use.[59]
Pre-summit events such as ORF’s ‘Breaking Barriers: Capacity, Data and Inclusion in AI’ held in South Africa, ‘From Action to Impact: India–France AI Policy Roundtable’ held in Bengaluru, and ‘Power. Pixel. Parity – Equity in the Age of Automation’ in Delhi focus particularly on equitable access to AI infrastructure and capabilities. In this spirit, India has markedly widened access to computing at subsidised rates through the IndiaAI Compute pillar, with approximately 34,000 GPUs available for bids at lower costs to all start-ups. Additionally, as noted earlier, AIKosh has made its datasets and models publicly available at no cost, thereby facilitating broader, equitable access.
The Chakra, discussed under its working group, is headed by a Distinguished Fellow at NITI Aayog and co-chaired by the Netherlands and Indonesia. Events such as the AI for Education Summit in Nairobi, the 22nd CII Annual Health Summit 2025 in New Delhi, and ‘Trustworthy AI for Social Good: From Multimodal Learning to Large Language Models’ in Mangalore underscore the relevance of this Chakra. In fact, all AI Impact Summit events have emphasised that AI development and innovation need not only a growth and innovation mindset but also social welfare and solutions engineered into them. Notably, the AI Compendia, including casebooks on healthcare, education and related areas, will feature AI-driven innovations and ideas for public availability, aligning with the spirit of this Chakra.
Deliberations within their respective working groups on the Chakras will ultimately inform the final Leaders’ Declaration. India has also released new AI Governance Guidelines ahead of the AI summit, which explicitly adhere to the seven Chakras and reiterate their importance and India’s commitment to the values enshrined therein.
India is taking the lead in the upcoming AI Impact Summit, with some analysts calling it another iteration of the ‘three-body problem’, referring to three major competing forces that must be balanced: political momentum, stakeholder consensus and implementable on-the-ground outcomes.[62] This tightrope is likely to shape what India can achieve while maintaining reservations that protect its national interests without jeopardising global development priorities.
The AI summit has made a clear departure from the principle-generation approach of Bletchley Park, Seoul and Paris toward a results-oriented outlook that ensures concrete deliverables, quantifiable standards and commitments, and inclusive institutional frameworks grounded in previously agreed-upon principles. The summit is no longer merely a symbolic gathering; it needs to deliver concrete, real-world applications in the face of rapidly evolving AI. This shift reflects hard-won lessons, as previous AI frameworks have shown a disparity between consensus and actual implementation. For instance, only one-fourth of the signatories to the UNESCO AI Recommendation have devised and operationalised the recommended policy tools; since the Recommendation is non-binding, implementation has remained optional.[63] Additionally, AI-specific institutions, such as GPAI, have narrow membership, limiting meaningful governance (if ever implemented).[64]
To create meaningful common ground for discussion, India must steer AI summits away from recurring pitfalls in global governance debates. The focus should not be on hollow or purely aspirational declarations, but on measurable, time-bound and responsible outcomes. An effort should be made to develop coordinated, inclusive frameworks that prevent the widening of the global AI divide. The summit proceedings and outcomes must ensure that the concerns and capacities of the Global South are fundamental to the agenda, as concentrating AI capabilities in a few countries or corporations would undermine trust and equity. At the same time, mechanisms to ensure the execution of commitments must be prioritised; while frameworks must remain voluntary, the commitments therein may be made obligatory to a limited extent.
While democratisation of AI resources has been articulated, its roadmap remains vague. Instead, India must take the lead in promoting a multilateral compute-sharing arrangement in which countries with the capability create a shared pool that Global South researchers can access transparently and on a need-based basis. This may alleviate some of the stress caused by geopolitical tensions that manifest as restrictions on computing capabilities. Similar commitments must be secured to operationalise shared datasets that are especially relevant to developmental issues, such as healthcare and epidemiological data, and satellite imagery for crop and climate monitoring. There should be minimal obligations for sharing these datasets, not just to ensure equitable access, but also to standardise data, while providing flexibility so that countries are not required to share data that may contravene their sovereign interests.
India should also champion lightweight[65] and adaptive[66] AI. Initiatives such as Kompact AI (which can run on standard Intel and AMD CPUs, making it far more cost-effective than GPU-based deployment) must be scaled and used as a case study to inform the development of a low-cost architecture for AI adoption in resource-constrained environments. This will also embed the Indian approach of frugal innovation into global technical standards, making equitable access a built-in design feature rather than an afterthought.
Standardisation is another aspect the summit must address. The development of technical and safety standards is essential to other areas of discussion and commitment, including skills development and education, talent mobility, safety toolkits, and more. While AI systems are being incorporated into existing standards, such as ISO/IEC, these standards only manage risks and do not address technical, ethical or legal aspects. They are non-enforceable and have limited participation from the Global South. There is also a distinct lack of unified standards for classifying AI, particularly for use cases.[67] Without appropriate standards, concretisation of and consensus on global norms will remain aspirational. India must therefore pursue a parallel track in AI technical and safety standardisation that explicitly accounts for the needs of the Global South and inclusive governance.
At the same time, it is essential to set realistic expectations. AI innovation is an ongoing process, and both states and private companies are (and need to continue being) cautiously optimistic about its progress.[68] The AI Impact Summit is only an initial step towards prioritising action-oriented discussions over aspirations, and any push for pervasive commitments will undoubtedly meet pushback, even leaving several agenda items deadlocked. Notably, representation and resources are uneven; resource-rich nations often dominate discussions, and many developing countries lack the infrastructure to implement advanced AI. Therefore, the focus must remain on achievable consensus across projects and voluntary frameworks, particularly on themes that all participants can realistically engage with and that build capacity.
India must also be cautious about establishing a regulatory framework during these discussions. While it is tempting to push for legally binding frameworks, historical evidence suggests that increased regulatory control over technology deployment does not necessarily lead to equitable distribution. India’s AI guidelines represent a ‘soft-touch’ approach, designed to ensure the country manages risks without imposing compliance burdens or constraining innovation.[69] Any push towards strict, binding regulatory frameworks will constrain the Global South’s innovation capacity while leaving developed nations’ technological advantages intact. Obligations, therefore, need to be asymmetric: compliance should be lighter for AI deployment in development-priority sectors (healthcare for neglected diseases, agricultural advisory systems), while maintaining scrutiny in high-risk applications such as surveillance.
The idea of ‘responsible AI’ must also not be used as a cover for technology denial. There needs to be recognition that frameworks emphasising ‘safe’, ‘trustworthy’ and ‘responsible’ AI may inadvertently be weaponised against developing countries, denying them access to frontier AI capabilities under the guise of risk management. The push for standardisation must therefore come with the caveat that thresholds defining ‘unsafe’ AI be well-defined and transparent, rather than discretionary criteria that can be used to gatekeep technologies.
Standards, in turn, should remain optional recommendations rather than mandatory requirements for accessing development funding or regional markets, to ensure that countries do not end up importing rigid mechanisms that may not be suitable to their own requirements. To ease adoption, the standards should be framed not as exact prescriptions but as principles that specify what each tier or category seeks to achieve.[70] This leaves room for implementation flexibility without jeopardising AI applications and outcomes.
India also needs to be mindful that global declarations require domestic follow-through, and that the country is ready to adapt to the norms it champions without jeopardising national interests. Therefore, India needs to avoid overcommitting to international regimes that conflict with domestic law or capacity, and to advocate voluntary frameworks over rigid, all-encompassing mandates.
Finally, effective governance requires broad stakeholder participation beyond industry and technical experts. It is necessary to engage governments, academia, policy think tanks and civil society in the formulation of responsible AI policy. This applies not only to India but also to international participation. The Summit must not be driven only by big tech or rich nations. The working groups, with co-chair curation across developing and developed countries, as well as multi-stakeholder and regional pre-summit meetings, represent a positive step in this direction. This trend must continue into, and beyond, the AI Impact Summit itself.
The intensification of global competition in AI and adjacent technologies has driven significant advances, making consideration, deliberation and international consensus on AI safety and risk even more critical. The AI Summitry process, which began in 2023, has reflected this turbulent reality, with its focus constantly evolving to keep pace with the development and proliferation of AI while mitigating its harmful consequences.
The agenda for each of these summits has progressively broadened, starting with frontier-model safety and opportunities at Bletchley, then institutionalising AI safety, inclusion and innovation in Seoul, and finally socio-economic impact and equitable development in Paris. It is expected that the AI Impact Summit in New Delhi will reorient the global conversation on AI from principles to their practical, tangible impact.
The AI Impact Summit will unfold against three critical realities. First, the unchecked growth and relevance of AI have spurred a global competition for data and talent, producing an arms race while AI governance remains far less developed. In this context, the role of AI summits in crystallising a shared, though still evolving, global governance framework is a crucial avenue to explore.
Second, AI innovation needs to embed inclusion and socio-economic development into its design to ensure it is safe, trustworthy and ethical. Today, computing capacity is disproportionately distributed globally, with 75 per cent in the US, 15 per cent in China, 5 per cent in the UK, and the remaining 5 per cent in the rest of the world.[71] Such unequal distribution does not manifest only in AI capabilities and assets; it continues to widen the chasm between high- and low-income countries, exacerbate socioeconomic disparities, and drive cultural homogenisation unless these risks are mitigated by design. With a special focus on ensuring inclusion, developments at the AI Impact Summit carry a responsibility to pave the way for more equitable AI development.
Finally, Indian leadership at the summit will balance the galvanisation of AI growth machinery with the need to ensure AI’s promise of inclusivity through AI for All, for itself and the broader international community, and the Global South specifically. India’s approach to the Summit has reflected this reality: the country has been slowly developing domestic norms that echo the global dialogue, while also adding its own cultural and spiritual dimension by invoking the Sutras and the Chakras to define the Summit’s priorities. The summit will therefore serve as both a test and a whetstone for India’s ambitions for global leadership in AI development and governance.
Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.
[1] Nicholas Kluge Corrêa et al., “Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance”, Patterns, Vol. 4, No. 10, 2023.
[2] “OECD AI Principles”, Organisation for Economic Co-operation and Development (OECD).
[3] “2023 G20 New Delhi Summit Final Compliance Report”, G20 Research Group, 13 November 2024.
[4] “G7 Leaders’ Statement on the Hiroshima AI Process”, The White House, 30 October 2023.
[5] “Recommendation on the Ethics of Artificial Intelligence”, United Nations Educational, Scientific and Cultural Organization (UNESCO), 23 November 2021.
[6] “International: The United Nations Adopts Its First Resolution on AI”, Baker McKenzie, 19 November 2021.
[7] “UNGA Adopts China-proposed Resolution to Enhance Int’l Cooperation on AI Capacity-building”, The State Council, The People’s Republic of China, 2 July 2024.
[8] Claire Melamed, “AI: Opportunity, Risk, and a Tough Test for Global Cooperation”, UN Foundation, 8 December 2025.
[9] Ibid.
[10] Matt Kosinski and Mark Scapicchio, “What is the Artificial Intelligence Act of the European Union (EU AI Act)?”, IBM, 20 June 2024.
[11] “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, TechPolicy.Press, 24 January 2025.
[12] Ibid.
[13] Meghna Pradhan, “From Diffusion to Discretion: Contextualising the US Pivot in Compute Control”, Issue Brief, Manohar Parrikar Institute for Defence Studies and Analyses, 4 July 2025.
[14] “Trump Signs Order to Block States from Enforcing Their Own AI Rules”, BBC News, 12 December 2025.
[15] Ibid.
[16] “Beijing Artificial Intelligence Principles”, AI Ethics and Governance Institute.
[17] “China’s AI Drive Aims for Integration Across Sectors and a Wake-Up Call for Europe”, Mercator Institute for China Studies (MERICS), 21 August 2024.
[18] “National Strategy for Artificial Intelligence”, NITI Aayog, Government of India, June 2018.
[19] Stefano Sedola et al., “AI for Africa Blueprint”, BMZ Digital, August 2022.
[20] Atahualpa Blanchet, “New Human Rights in the Digital Age: Mercosur’s Contribution to International Regulation”, Equal Times, 27 November 2023.
[21] “Introducing the ASEAN Consolidated Strategy on the 4IR”, YCP, 3 February 2022.
[22] “U.K.’s AI Safety Summit Ends With Limited, but Meaningful, Progress”, Time, 2 November 2023.
[23] “On Day 2 of the AI Safety Summit 2023, India Takes Decisive Stand for AI to Be Safe and Trusted and Platforms to Be Accountable for Citizens Across the World”, Press Information Bureau, Ministry of Electronics and Information Technology, Government of India, 2 November 2023.
[24] Gregory C. Allen and Georgia Adamson, “AI Seoul Summit”, Center for Strategic & International Studies (CSIS), 23 May 2024.
[25] Tess Buckley, “Key Outcomes of the AI Seoul Summit”, techUK, 22 May 2024.
[26] Ardi Janjeva, Seungjoo Lee and Hyunjin Lee, “AI Seoul Summit Stocktake: Reflections and Projections”, The Alan Turing Institute, 16 June 2024.
[27] Jovan Kurbalija, “The Paris AI Summit: A Diplomatic Failure or a Strategic Success?”, DiploFoundation, 12 February 2025.
[28] “Paris AI Summit: Why Won’t US, UK Sign Global Artificial Intelligence Pact?”, Al Jazeera, 12 February 2025.
[29] “Opening Address by Prime Minister Shri Narendra Modi at the AI Action Summit, Paris (February 11, 2025)”, Ministry of External Affairs, Government of India, 11 February 2025.
[30] Sanjay Kumar Verma, “2026 AI Impact Summit: From Principles To Practice, The Herald Of A New Digital Era”, The Secretariat, 26 November 2025.
[31] Jon Truby, “People, Planet, Progress: An Opportunity for an International Agreement on AI Climate Sustainability at India’s AI Impact Summit 2026”, OECD.AI, 9 October 2025.
[32] “AI Impact Summit”, Government of India.
[33] Ibid.
[34] Ibid.
[35] “India Accelerates AI Self-Reliance: 12 Companies Developing Foundation Models Using 38,000 GPUs at ₹65/Hour; National Large Language Model Slated for Launch by End-2025”, Press Information Bureau, Ministry of Electronics and Information Technology, Government of India, 10 October 2025.
[36] “AI Risks Sparking a New Era of Divergence as Development Gaps Between Countries Widen, UNDP Report Finds”, United Nations Development Programme, 2 December 2025.
[37] “India Accelerates AI Self-Reliance: 12 Companies Developing Foundation Models Using 38,000 GPUs at ₹65/Hour; National Large Language Model Slated for Launch by End-2025”, no. 35.
[38] “Human Capital Working Group”, IndiaAI Impact Summit 2026, Government of India.
[39] Ibid.
[40] “Transforming India with AI: Over ₹10,300 Crore Investment and 38,000 GPUs Powering Inclusive Innovation”, Press Information Bureau, Ministry of Electronics and Information Technology, Government of India, 12 October 2025.
[41] “Transforming India with AI: IndiaAI Fellowship Program and Portal Expanded to Support 13,500 Scholars”, Press Information Bureau, Ministry of Electronics and Information Technology, Government of India, 12 October 2025.
[42] “Transforming India with AI: Over ₹10,300 Crore Investment and 38,000 GPUs Powering Inclusive Innovation”, no. 40.
[43] Alex Krasodomski et al., “Artificial Intelligence and the Challenge for Global Governance: Nine Essays on Achieving Responsible AI”, Chatham House, 7 June 2024.
[44] “Artificial Intelligence and the Sustainable Development Goals: Operationalizing Technology for a Sustainable Future”, United Nations Global Compact, 30 April 2025.
[45] “Inclusion and Social Empowerment Working Group”, IndiaAI Impact Summit 2026, Government of India.
[46] “More than 19,500 Mother Tongues Spoken in India: Census”, The Indian Express, 1 July 2018.
[47] “Transforming India with AI: Over ₹10,300 Crore Investment and 38,000 GPUs Powering Inclusive Innovation”, no. 40.
[48] “AI Impact Summit”, Government of India.
[49] “Safe and Trusted AI Working Group”, IndiaAI Impact Summit 2026, Government of India.
[50] Trisha Ray, “Safety Should Be Front and Center in India’s Vision for Its AI Impact Summit”, Atlantic Council, 24 November 2025.
[51] Shristi Agarwal and Amiya Mukherjee, “India’s New Framework for Online Content Regulation: An Overview of the Recent Amendments to the IT (Intermediary Guidelines) Rules”, Lexplosion, 22 October 2025.
[52] “AI Impact Summit”, Government of India.
[53] “Resilience, Innovation, and Efficiency Working Group”, IndiaAI Impact Summit 2026, Government of India.
[54] Ibid.
[55] Cheena Kapoor, “Can AI Drive Climate Adaptation for India’s Farmers?”, Devex, 1 December 2025.
[56] Sara Frueh, “How AI Is Shaping Scientific Discovery”, National Academies of Sciences, Engineering, and Medicine, 6 November 2023.
[57] “AI Impact Summit”, Government of India.
[58] “Science Working Group”, IndiaAI Impact Summit 2026, Government of India.
[59] Clark Jennings and Akanksha Sinha, “Setting the Agenda for Global AI Governance: India to Host AI Impact Summit in February 2026”, Crowell and Moring LLP, 5 August 2025.
[60] “Democratizing AI Resources Working Group”, IndiaAI Impact Summit 2026, Government of India.
[61] “AI for Social Good: Improving Lives and Protecting the Planet”, McKinsey and Company, 10 May 2024.
[62] Trisha Ray, “Safety Should Be Front and Center in India’s Vision for Its AI Impact Summit”, no. 50.
[63] Huw Roberts, Emmie Hine, Mariarosaria Taddeo and Luciano Floridi, “Global AI Governance: Barriers and Pathways Forward”, International Affairs, Vol. 100, No. 3, May 2024, pp. 1275–1286.
[64] Ibid.
[65] Lightweight AI refers to AI systems that are optimized to be computationally efficient, less resource-intensive, and more cost-effective.
[66] Adaptive AI are systems that learn continuously and modify their behaviour in real-time, based on the data available, user interactions and changing environments.
[67] “Notes From the AI Governance Center: The Complexity of AI Standardization”, International Association of Privacy Professionals, 7 August 2025.
[68] While we are seeing significant investments in AI, a significant number of companies are also showing cognizance of the possible ‘AI Bubble’. See Dario Amodei, “Anthropic CEO Says Some Tech Firms Too Risky With AI Spending”, Bloomberg, 3 December 2025.
[69] “AI Governance Guidelines: A Bet on Innovation”, EY India Insights, 3 December 2025.
[70] “Recent SEC Comment Letter Reveals the Difference Between Prescriptive-Based and Principles-Based Rules”, Bass, Berry and Sims, 5 November 2020.
[71] Adrien Abecassis et al., “A Blueprint for Multinational Advanced AI Development”, AI Governance Initiative, University of Oxford, November 2025.