AI in research and innovation: a year of transformation

Over the past year, artificial intelligence has shifted decisively from experimental tool to strategic enabler, with major implications for research, innovation and institutional management. Far beyond its early promise, AI is now influencing the way research is conducted, funded, evaluated and governed across disciplines.

Advances in generative models, combined with growing institutional uptake and cross-sector momentum, are positioning AI as a foundational technology in the global research and innovation ecosystem. For research managers, this transformation brings both opportunity and challenge: from navigating ethical frameworks and data governance to enhancing research productivity, skills development and strategic investment.

US and UK opt out of international agreement on AI safety

The third global AI Action Summit was held in Paris on 10–11 February 2025, and was co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi. Unlike its predecessors, this summit prioritised accelerating AI development and economic investment over safety concerns. Notably, while 58 countries endorsed the Statement on Inclusive and Sustainable Artificial Intelligence, the US and UK declined to sign, citing insufficient attention to governance and national security risks.

Ahead of the summit, the First International AI Safety Report was published on 29 January 2025. Compiled by 96 experts from 30 countries, the OECD, EU and UN, the report assessed risks from general-purpose AI, including job displacement, cybersecurity threats and potential loss of human control. It aimed to inform policymakers but did not prescribe specific regulations.

Key summit outcomes included the launch of ‘Current AI’, a $400 million public-interest foundation promoting open, transparent AI, and the ROOST initiative, providing open-source safety tools.

UK backs AISI

The UK’s AI Security Institute (AISI) is the first state-backed organisation dedicated to advanced AI safety for the public interest. AISI will make its findings available worldwide to facilitate an effective global response to the opportunities and risks posed by advanced AI.

In May 2025, AISI published its first Research Agenda, outlining the most urgent questions it is working to answer as AI capabilities grow, and setting out its roadmap for tackling the most complex technical challenges in AI security. AISI is pursuing technical research to ensure AI remains under human control, is aligned to human values and is robust against misuse.

The agenda tackles key risk domain research including:

  • How AI can enable cyber-attacks, criminal activity and dual-use science, and how to ensure human oversight of AI and prevent societal disruption.
  • Understanding how AI influences human opinions.

The agenda provides a snapshot of current thinking but also acts as a call to the wider research community to join AISI in building shared rigour, tools and solutions to AI’s security risks.

The AISI Challenge Fund opened on 5 March 2025 and expressions of interest will be reviewed on a rolling basis until all funding has been allocated. It is open to academic/research institutions and nonprofit organisations in any country to advance the science of AI safety and security across the priority research areas of safeguards, control, alignment and societal resilience.

The Institute’s Systemic AI Safety Fast Grants will enable researchers to explore how to protect society from AI risks such as deepfakes and cyberattacks, as well as helping to harness its benefits, such as increased productivity. The most promising proposals will be developed into longer-term projects and could receive further funding, with the total value amounting to £8.5 million.

GenAI and UK academia

A recent survey by the Higher Education Policy Institute (HEPI) and Kortext revealed a dramatic surge in the use of generative AI tools among undergraduate students. The study, conducted in December 2024, found that 88% of students now use AI for assessments, a significant jump from 53% the previous year.

The most common uses of AI include explaining concepts (58%), summarising articles (48%) and suggesting research ideas (41%). A notable 18% of students have directly included AI-generated text in their work. Students cite saving time (51%) and improving work quality (50%) as primary reasons for using AI. While most students (67%) consider AI skills essential, only 36% have received training from their institutions. However, staff literacy in AI has improved, with 42% of students reporting that staff are well-equipped to help them with AI, compared to only 18% in 2024.

Josh Freeman, Policy Manager at HEPI and author of the report, said: ‘It is almost unheard of to see changes in behaviour as large as this in just 12 months. The results show the extremely rapid rate of uptake of generative AI chatbots. They are now deeply embedded in higher education and many students see them as a core part of the learning process. Universities should take heed: generative AI is here to stay.’

The University of Edinburgh has been at the forefront of integrating AI into research, emphasising both its transformative potential and the importance of responsible use. The Generative AI Laboratory (GAIL) is a multidisciplinary initiative dedicated to advancing generative AI technologies.

GAIL focuses on applications in healthcare, climate sustainability and economic development, while emphasising ethical and responsible AI use. The lab fosters innovation through seed funding, fellowships and collaborative workshops, and leverages the university’s high-performance computing infrastructure to support its research endeavours.

The University of Oxford is leveraging AI in support of a broad research agenda, education and university operations. Its renowned Bodleian Library is digitising rare texts and using OpenAI’s API to transcribe them, making centuries-old knowledge newly searchable by scholars worldwide.

Anne Trefethen, Pro-Vice-Chancellor, Digital, University of Oxford, said: ‘This new collaboration marks an exciting step forward, offering fresh opportunities to enrich our research, expand our AI capabilities and foster skill development. By working together, we can learn from one another, advancing the frontiers of artificial intelligence, understanding its impact on education and unlocking its vast potential for the benefit of our university community and beyond.’

EU plans for an AI future

In April 2025, the European Commission announced the launch of the AI Continent Action Plan, a major strategy designed to position Europe as a global leader in AI. The initiative seeks to drive AI adoption, innovation and economic growth by focusing on five strategic areas: computing infrastructure; data; skills; development of algorithms and adoption; and simplifying rules.

The plan sets out a €10 billion investment in Europe’s supercomputing infrastructures and network of 13 AI Factories across 17 Member States and two EuroHPC Participating States, which foster innovation, collaboration and development in the field of AI. It further sets out a €20 billion investment through public-private partnerships facilitated by the European Investment Bank to establish up to five gigafactories – large-scale facilities equipped with supercomputers and data centres designed to support the development of advanced AI technologies. Each gigafactory will house over 100,000 AI processors, more than four times the power of AI Factories. An official call to establish the AI Gigafactories will be published in the fourth quarter of 2025 by the EuroHPC Joint Undertaking.

Elsewhere, the plan aims to deliver effective access to reliable and well-organised data as a prerequisite of widespread AI adoption. In the second half of 2025, the Commission plans to launch the Data Union Strategy to enable businesses and administrations to share data more easily and at scale while maintaining high privacy and security standards. Related to this is the establishment of Data Labs, which will bring together and curate large data volumes from different sources in AI Factories.

To stimulate private sector investment in cloud capacity and data centres, the Commission will propose the Cloud and AI Development Act to ensure the EU’s data centre capacity fully meets the needs of businesses and public administrations by 2035. The goal is to at least triple the EU’s data centre capacity in the next five to seven years, prioritising highly sustainable data centres.

To boost the adoption of AI across companies of all sizes – especially among SMEs – and across all sectors, the plan also includes the future Apply AI Strategy, which will serve as the EU’s overarching AI strategy to ensure European companies are global leaders in AI. The Apply AI Strategy will establish links with and be adopted at the same time as the forthcoming European Strategy for AI in Science, which will establish a singular policy approach towards AI in science throughout the EU and accelerate its responsible use.

The European Strategy for AI in Science will aim to make it easier for scientists across the EU to adopt the technology and to carry out more impactful and productive research in key areas. It will pave the way towards a European AI Research Council, in the form of a Resource for AI Science in Europe (RAISE), that would pool resources for scientists developing and applying AI in the EU and drive the advancement of AI in and through science in Europe.

The AI Continent Action Plan will also increase the overall provision of Bachelor’s and Master’s degrees and PhD programmes in key technologies, including AI, across the EU. The Commission is therefore establishing an AI Skills Academy to provide education and training on skills related to the development and deployment of AI, and particularly GenAI. Through the Academy, the Commission will pilot an AI apprenticeship programme to prepare a pipeline of AI specialists trained on real-world projects and ready to (re-)enter the EU labour market.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, said: ‘Artificial intelligence is at the heart of making Europe more competitive, secure and technologically sovereign. The global race for AI is far from over. Time to act is now. This Action Plan outlines key areas where efforts need to intensify to make Europe a leading AI Continent. We are working towards a future where tech innovation drives industry and public services forward, bringing concrete benefits to our citizens and businesses through trustworthy AI. This means a stronger economy, breakthroughs in healthcare, new jobs, increased productivity, better transport and education, stronger protection against cyber threats, and support in tackling climate change.’

In April 2025, the European Commission and European Research Area countries and stakeholders published the second edition of the Living Guidelines on the Responsible Use of Generative AI in Research to help the European research community – including researchers, research organisations and research funders – use GenAI responsibly. The guidelines aim to ensure that a coherent approach to the technology applies across Europe and are designed to maintain research integrity by balancing the technology’s benefits with the potential risks.

France forges ahead in AI

France is now the third country in the world in terms of the number of AI researchers and is recognised as the leading hub for generative AI in Europe.

It has moved into the third stage of the national strategy for artificial intelligence (SNIA), which aims for a broader adoption of AI within society, businesses and public services. The main strategy in this phase is the introduction of ‘AI Cafés’, which facilitate democratic debate and educational outreach on artificial intelligence.

As part of this phase, France has also founded the first European institute for AI evaluation and security (National Institute for AI Evaluation and Security – INESIA), launched in January 2025. The Institute focuses on three key areas: analysis of systemic risks related to national security; support for AI regulation implementation; and evaluation of AI models in terms of performance and operational safety.

The ‘IA Cluster’ system has now established nine centres across France, which house more than 4,000 AI researchers. The goal for 2030 is to train 100,000 people, including 20,000 in continuing education, and to position at least one institution of excellence among the top international ranks.

The Netherlands looks to place trust in AI

Over the past year, the Netherlands has made significant strides in AI research and innovation, solidifying its position as a leader in the field.

In September 2024, the Innovation Center for Artificial Intelligence (ICAI) expanded to include 18 Dutch knowledge institutions, forming a nationwide AI ecosystem. This collaboration aims to attract and retain AI talent, align research with the United Nations Sustainable Development Goals (SDGs), and enhance the practical impact of AI technologies across various sectors.

The Dutch Research Council (NWO) has launched the ROBUST programme, which will create 170 PhD positions and focus on developing trustworthy AI applications in areas such as healthcare, energy and public services. Building on the ICAI network of academic–industrial collaborations, 54 partners are participating in the research: 21 from knowledge institutions including four universities of applied sciences, 23 partners from private companies and 10 from civil society organisations.

The long-term goal of the ROBUST programme is to create economic impact and contribute to sustainable growth through the development of trustworthy AI. Research will seek to realise breakthroughs in five core dimensions of robust AI: accuracy, reliability, repeatability, resilience and safety.

Germany maintains strategic approach

The Executive Committee of the Deutsche Forschungsgemeinschaft (DFG – German Research Foundation) is continuing its strategic funding initiative in the field of AI. The programme for Emmy Noether Independent Junior Research Groups in the Field of Artificial Intelligence Methods, which has been running since 2019, will see an additional two new funding rounds in the coming years, with up to 15 new groups receiving funding from 2026 onward.

DFG also launched an €8 million funding programme to support the development of high-quality data corpora for AI research. The initiative is part of the DFG’s Scientific Library Services and Information Systems (LIS) Programme and aims to create robust foundations for AI advancement in research. The scheme will support projects focused on establishing and developing extensive data collections that can support AI methods across multiple research areas. The initiative emphasises creating resources that enable research beyond individual projects and enhance scientific information provision.

The FLEXI project at the University of Hagen has developed an open infrastructure for experimenting with large language models (LLMs). This initiative enables educators and researchers to explore AI applications in teaching and learning, fostering innovation within higher education.

Goethe University Frankfurt is leading the IMPACT project, a collaborative effort with several German universities, including the University of Bremen and the Free University of Berlin. Funded by the Federal Ministry of Education and Research, this project focuses on implementing AI-based feedback and assessment tools in higher education. The goal is to provide personalised, text-based feedback to students throughout their academic journey, thereby enhancing learning outcomes and academic support.

Tech giants embed themselves in R&I

OpenAI

Announced on 5 March 2025, OpenAI’s NextGenAI Consortium is a $50 million initiative to accelerate AI-driven research and education. It brings together 15 leading institutions including MIT, Harvard, Oxford and Texas A&M to explore AI applications across fields such as healthcare, energy, agriculture and education. The programme provides funding to consortium members, access to OpenAI’s models (like GPT-4o), computational resources and tools like ChatGPT Edu.

Projects range from improving medical diagnostics and advancing AI literacy to enhancing digital libraries and developing new learning models. NextGenAI aims to bridge academia and industry, empowering a new generation of researchers and students to harness AI for societal benefit.

OpenAI’s Deep Research, which was launched in February 2025, is an advanced AI agent integrated into ChatGPT. It autonomously conducts comprehensive, multi-step research tasks by browsing the web, analysing diverse sources including text, images and PDFs, and generating detailed reports with citations. Operating over five to 30 minutes, Deep Research is powered by OpenAI’s o3 model, optimised for reasoning and data analysis. It achieved a 26.6% score on the Humanity’s Last Exam (HLE) benchmark.

HLE is a rigorous benchmark developed to assess the advanced reasoning and knowledge capabilities of LLMs. Jointly created by the Center for AI Safety and Scale AI, HLE comprises 2,500 expert-crafted questions spanning subjects such as mathematics, physics, biology, humanities, computer science and engineering.

Google

Google DeepMind focuses on leveraging AI to accelerate scientific discovery and understanding. The emphasis is on developing AI systems that can generate new knowledge, predict complex phenomena, and ultimately benefit humanity through scientific breakthroughs and innovative technologies.

DeepMind aims to solve intelligence to advance science, creating powerful AI tools like AlphaFold to tackle fundamental challenges in biology, materials science and beyond.

AlphaFold has revolutionised the field of structural biology by predicting the three-dimensional structure of proteins directly from their amino acid sequence. This capability addresses the longstanding ‘protein folding problem’ in biology, which previously required years of expensive and complex experimental techniques like X-ray crystallography or cryo-electron microscopy.

In recognition of this contribution, Demis Hassabis and John Jumper of DeepMind were awarded the 2024 Nobel Prize in Chemistry.

Microsoft

Established in 2022, Microsoft Research AI for Science is a global initiative dedicated to accelerating scientific discovery through the application of AI. Led by Dr Christopher Bishop, the programme unites experts in machine learning, quantum physics, computational chemistry, molecular biology and other disciplines to address some of the most pressing challenges in science and society.

AI for Science collaborates with various institutions and leverages Microsoft’s broader research ecosystem to drive innovation. The initiative’s work seeks to revolutionise scientific methodologies, making research more efficient and enabling discoveries that were previously unattainable due to computational limitations.

Projects include:

  • Aurora Forecasting, an AI model capable of forecasting global weather and air pollution with remarkable speed and accuracy, delivering predictions in under a minute.
  • BioEmu-1, a deep learning model that can generate thousands of protein structures per hour, opening new avenues for protein science and drug discovery.
  • Azure Quantum Elements, a platform combining AI, high-performance computing and quantum tools to accelerate research in materials science, chemistry and pharmaceuticals.

Looking ahead: AI’s lasting impact on research and innovation

AI is reshaping international research and innovation by accelerating data analysis, enabling complex modelling and enhancing cross-disciplinary collaboration. It also introduces challenges, including data bias, reproducibility concerns and ethical considerations surrounding algorithmic transparency.

However, the long-term impacts remain uncertain, particularly regarding intellectual property, the future role of human researchers and the equitable distribution of AI-driven advancements. Ongoing critical evaluation and inclusive policy development will be essential to harness AIโ€™s benefits while mitigating its risks.