The AI Security Graduate Program gives you talent who can secure robust, ethical, and sustainable AI — directly inside your organisation.
In close collaboration with you, Nexer Tech Talent and AI Sweden develop the next generation of AI engineers and implementers with deep expertise in AI security.
AI is reshaping organisations at their core. It unlocks enormous potential but also introduces entirely new risks. When AI handles data, powers processes, and guides decisions, traditional cybersecurity alone is no longer enough.
Today, AI is developed and deployed faster than most organisations can secure it. The result is growing exposure and an accelerating skills gap.
The AI Security Graduate Program is one of the most ambitious talent initiatives in AI security, designed for organisations that want to stay ahead — not respond after the fact.
Through the AI Security Graduate Program, you build critical AI security competence directly into your organisation. For 12 months, you gain access to talent working hands-on with your real challenges, in your environments. After the program, they transition into full-time roles with you, continuing to grow and spread that expertise internally.
The program blends world-leading research, advanced AI labs and practical work on real security challenges with a unique ecosystem of experts from industry and academia.
Developed in collaboration with leading universities, specialists and partners across both the private and public sectors, the program is designed to strengthen your long-term capability to build secure and resilient AI.
Solutions to Real Challenges
Talents work hands-on with your actual AI development or implementation, integrating security from day one. This program isn’t about hypothetical cases — it’s real impact in your environment, on your data, with your risks.
Expertise from AI Sweden
Access to advanced lab environments and world-leading researchers and experts in Secure AI, AI Safety, and AI Governance, including specialists from Madison CyberLabs, Chalmers, Uppsala University, and AI Sweden.
Immediate reinforcement for your team
A strategic way to build internal AI security expertise.
Our talents grow with your organisation and become a long-term asset beyond the program.
AI Act, NIS2 & Cyber Resilience Act
The program equips your organisation with the knowledge and methods needed to meet upcoming regulations — while strengthening your ability to protect AI systems against attacks, manipulation, and data breaches.
The AI Security Graduate Program offers a unique combination of deep expertise, hands-on application in real business environments, and access to Sweden’s rapidly growing AI ecosystem.
It creates a direct path for organisations to address their most critical AI security challenges, while building the long-term capability required for the future.
For participants, the program provides an accelerated entry into one of the most in-demand roles on the market, enabling them to contribute from day one. By working closely with both the partner organisation and AI Sweden’s experts, they develop the skills to assess risks, strengthen model robustness, and implement security principles in real AI systems.
The program is designed for organisations that:
implement or plan to implement AI
develop their own models or rely on third-party AI
need to strengthen internal AI capability
are affected by the AI Act, NIS2 or the Cyber Resilience Act
want to lead the way in secure and responsible AI development
The AI Security Graduate Program is aimed at companies, government agencies and public-sector organisations that develop or deploy AI solutions and need talent with the expertise to manage the new and complex security challenges that follow.
It is relevant across all sectors — from industry, government and tech to finance, logistics, healthcare, energy and municipal operations.
Get in touch to explore how the AI Security Graduate Program can support your specific challenges.
As a partner organisation, you gain access to a talent with a strong technical foundation who continues to develop throughout the program in:
AI security & AI safety
Adversarial machine learning
Model evaluation & robustness
Secure AI development
Data governance and AI Act, NIS2 and Cyber Resilience Act requirements
Risk management for AI systems
This means you don’t just add capacity — you build long-term AI capability inside your organisation.
The AI Act, NIS2 and the Cyber Resilience Act require organisations to demonstrate that their AI systems are secure, transparent, robust and traceable. The program develops talent who can:
Classify AI systems according to risk level
Ensure data quality, integrity and traceability
Document models and pipelines in line with regulatory demands
Implement security and safety by design
Conduct risk assessments and continuous monitoring
Integrate AI security into the organisation’s governance and processes
In short: the program helps organisations stay compliant, competitive and future-ready.
Cybersecurity protects the infrastructure. AI security protects the intelligence.
Traditional cybersecurity focuses on preventing unauthorised access to networks, systems and data. AI security complements this by ensuring that AI models behave robustly, reliably and ethically — even when exposed to faulty training data, manipulation, bias or attacks that target the model itself.
Where cybersecurity stops external threats, AI security reduces the risks that arise when developing, implementing and operating AI systems.
Since 2014, we have supported more than 2,000 tech talents into new roles within AI, data, software development and cybersecurity, while helping our partners build sustainable, future-ready capability.
At Nexer Tech Talent, we work closely with candidates to understand their drivers, skills and ambitions. This gives us deep insight into what shapes the specialists of tomorrow and how organisations can unlock that potential. Through strong collaboration with academia and the wider Nexer Group — and with access to a unique network of emerging tech talent — we match candidates with the right abilities to organisations that want to stay ahead, both now and long-term.
Nexer Tech Talent manages the entire recruitment process and ensures that every candidate has the technical depth, analytical ability, interest in AI security, curiosity and communication skills required to contribute and share knowledge.
After 12 months, the goal is for the talent to transition into a permanent role within the partner organisation.
The talent works directly with your real AI security challenges. This can include:
– analysing risks in existing AI models
– testing robustness and resistance to attacks
– developing secure AI workflows and pipelines
– identifying bias, data issues and vulnerabilities
– establishing documentation and governance for the AI Act
– supporting data scientists and developers in secure AI development
During the 12 months, their time is typically divided as follows:
2–4 days per week working hands-on with your projects inside the partner organisation
1–3 days per week at AI Sweden, receiving training, supervision, research access and working in advanced lab environments
This creates a rare combination of academic insight, practical application and strategic capability.
The talent quickly becomes a valuable part of your team, helping you build secure, responsible and future-proof AI.
Applications are open. You’re welcome to submit your application here.
Candidates applying to the program typically come from backgrounds such as:
computer engineering
computer science
AI and machine learning
cybersecurity
software development
mathematics or engineering physics
The training covers both theory and hands-on practice, with modules in:
AI security and secure AI engineering
AI safety and responsible AI design
Adversarial attacks and defence methods
Model monitoring, evaluation and logging
AI governance, ethics and regulatory frameworks
The AI Act, NIS2 and international compliance
Documentation and risk management processes
All training is delivered by experts from AI Sweden, researchers and international partners.
PhD, AI Sweden
Dr. Bridges is a mathematician and researcher at AI Sweden, with extensive experience from Oak Ridge National Laboratory in the United States, where he worked on advanced research for the US Department of Energy.
He has broad expertise in differential privacy for machine learning, AI applications in healthcare, reverse engineering of automotive CAN systems, anomaly detection and enterprise network security.
With experience from some of the world’s most high-security environments, Dr. Bridges contributes deep competence in AI security, model robustness and method development, and is a key resource in AI Sweden’s work to strengthen Sweden’s ability to develop safe and reliable AI.
President, Dakota State University
Member of AI Labs Advisory Board at AI Sweden
Dr. Griffiths is the President of Dakota State University and one of the United States’ most experienced leaders in technology, research and higher education. She has held several national appointments, including on the National Science Board and the National Security Commission on Artificial Intelligence, where she leads work on developing the future AI talent pipeline.
Dr. Griffiths has led major projects for US government agencies such as NASA, the National Science Foundation and defence and intelligence organisations, as well as for global companies including IBM and AT&T Bell Laboratories.
Dean, The Beacom College of Computer & Cyber Sciences
Mary Bell is the Dean of The Beacom College of Computer and Cyber Sciences at Dakota State University, and a central driving force behind the development of the university’s advanced programmes in AI, cybersecurity and quantum technologies.
She leads the creation of new academic programmes, strengthens research collaborations, and builds environments where students work directly with real-world technical challenges.
Bell has a long career within the US defence and education sectors, including serving as a professor at the National Defense University and as a leader in the US Army with extensive experience in aviation and intelligence operations. Today, she is a key force behind DSU’s ambition to become a leading institution in cyber sciences, with a focus on innovation, partnerships and the technical competencies of the future.
Director of AI Labs, AI Sweden
Dr. Nordlund has been a driving force behind the establishment of AI Sweden and has a broad background spanning both academia and industry. He has previously served as Director of Technology, Strategy and Technology Acquisition at Saab Group, VP of R&D at Emerson Process Management – Level and Marine, and Head of Advanced Graduate Programmes at Zenseact, a subsidiary of Volvo Cars.
View Mats Nordlund’s talk from Nexer Summit here.
Structure
– 2–4 days/week at the partner organisation:
Talents work on business-critical AI security challenges close to your operations.
– 1–3 days/week at AI Sweden:
Training, lab environments and access to international expert competence.
– Change agent role:
Talents actively spread knowledge and strengthen AI security capability inside your organisation.
– Employment model:
Consultant through Nexer during the program, with the opportunity to transition to a permanent role after 12 months.
Program Phases
– Months 1–3: Introduction, fundamentals of AI/ML security, onboarding and case analysis.
– Months 4–9: In-depth case work guided by experts from AI Sweden.
– Months 10–12: Final delivery, internal knowledge transfer and transition to employment where desired.
Result
– Developed AI security solutions.
– Talents with specialised AI security competence who can transfer knowledge into the organisation — a critical capability in high demand.
AI Sweden is the national centre for applied AI. More than 170 partners — companies, public agencies and universities — come together here to accelerate the development of safe, sustainable and responsible AI.
With one of Europe’s strongest AI ecosystems, AI Sweden builds the knowledge, tools and research that truly move the needle. From advanced labs and world-leading expertise in language models and secure AI, to broad collaborations that strengthen Sweden’s innovation capacity and its ability to lead in a rapidly shifting technological landscape.
AI Sweden’s mission is clear — and critical: to make AI a force that benefits society, increases competitiveness and creates value for everyone in Sweden.
We are looking for newly graduated MSc or PhD candidates in computer science, computer engineering, mathematics, engineering physics, cybersecurity — or a related technical field. You have a strong interest in AI, security and complex problem-solving, are curious, driven and want to make a real impact.
Book a meeting with us and get an introduction to Sweden’s leading talent program in AI security.
