Project Management Support Specialist (Digital Technologies Expert)
UNOPS

Job Description
Job Highlight
Under UNOPS supervision, the incumbent will support thought leadership and practical innovation on the responsible use of artificial intelligence in PCVE, shaping global policy, strengthening institutional readiness, and supporting safer digital ecosystems. This position supports the implementation of UNOCT/UNCCT’s Global PCVE Programme, specifically under its Artificial Intelligence (AI) and Preventing and Countering Violent Extremism (PCVE) area of work. The role contributes to advancing UNOCT’s work on emerging technologies and digital ecosystems in PCVE, with a focus on: the responsible and ethical use of AI in PCVE; risks related to AI-enabled harms, including audience manipulation, radicalization pathways, and online exploitation for recruitment to terrorism; and strengthening institutional readiness and capacity for AI adoption in prevention contexts.
Role Purpose
The Project Management Support - Specialist [Digital Technologies Expert] - Retainer will provide substantive technical expertise and programme support to design, implement, and translate research, policy, and capacity-building activities related to artificial intelligence in preventing and countering violent extremism. The role may support any of the following: evidence generation (research, consultations, applied analysis on AI risks and opportunities for the PCVE sector); programme delivery (trainings, stakeholder dialogues, capacity-building initiatives); and development of policy and practice outputs (guidance documents, recommendations, toolkits, training curricula). The incumbent will ensure outputs are methodologically rigorous, human rights-compliant, and operationally applicable, while promoting responsible and context-sensitive use of AI in PCVE.
Functions / Key Results Expected
- Development and Planning: Provide technical input into the design and implementation of activities related to emerging technologies and PCVE, ensuring alignment with project objectives, donor requirements, and UNOCT priorities. Develop research frameworks and methodologies to assess risks, opportunities, and use cases of AI in PCVE contexts. Design and deliver targeted capacity-building sessions and mentorship to support institutional adoption of AI governance frameworks and strengthen AI literacy and decision-making capacity in PCVE. Support PCVE-relevant stakeholder mapping across governments, academia, civil society, and private sector actors, including AI developers and platforms.
- Project Set-Up and Closure: Design and implement structured institutional or programmatic assessments (e.g. readiness diagnostics, needs assessments, or baseline analyses) to inform tailored support and engagement. Provide targeted advisory support to partner institutions, including the development or strengthening of policies, governance frameworks, risk management approaches, and operational guidance related to the use of digital technologies. Facilitate coordination with stakeholders, including Member States, international organizations, civil society, academia, and private sector actors. Support the development and delivery of project activities, including consultations, workshops, and capacity-building initiatives.
- Monitoring and Reporting: Conduct qualitative and quantitative analysis of data collected through research, consultations, and project implementation activities. Produce high-quality analytical outputs, including policy briefs, guidance documents, and presentations. Translate technical and policy insights into practical, user-oriented recommendations and tools for policymakers, practitioners, and institutional stakeholders. Support reporting to donors and internal stakeholders, ensuring outputs are accurate, results-oriented, and aligned with project frameworks.
- Quality Assurance: Ensure that all outputs meet high standards of analytical rigor, methodological soundness, and policy relevance. Apply human rights-based, gender-responsive, and conflict-sensitive approaches across all activities and deliverables. Integrate principles of responsible technology use, including considerations related to transparency, accountability, and risk mitigation. Review and refine project outputs to ensure clarity, usability, and alignment with UN standards and best practices.
- Knowledge Building and Sharing: Contribute to the development of knowledge products and practical tools (e.g. assessments, guidance frameworks, toolkits) to support institutional capacity and decision-making. Support the design and delivery of trainings, workshops, and learning initiatives to strengthen stakeholder capacity, including on governance, risk management, and responsible technology use. Facilitate knowledge exchange through stakeholder dialogues, policy discussions, and participation in relevant global or regional forums. Document lessons learned and emerging practices to inform future programming, policy development, and scaling efforts.
Education Requirements
Required: An advanced university degree (Master’s degree or equivalent) in international relations, political science, security studies, social sciences, technology policy, organisational management, or a related field, with five (5) years of relevant experience; OR a first-level university degree (Bachelor’s degree or equivalent), preferably in one of the fields above, with seven (7) years of relevant experience.
Experience Requirements
Required: Progressively responsible experience in AI/new technologies and PCVE, counter-terrorism, digital harms, or related fields. Experience in research design, analysis, and production of policy-relevant outputs. Experience working on AI, digital platforms, or emerging technologies, including governance, human rights, risk management, leadership readiness, or organizational readiness dimensions.
Desired: Experience engaging in multi-stakeholder environments, including advising governments, international organizations, civil society, and private sector/tech actors. Experience in designing and delivering capacity-building initiatives, including training or learning tools. Experience working on responsible AI, digital wellbeing, or online harms, including behavioral and societal impacts of technology. Experience translating complex technical concepts into accessible and user-centered content for non-technical audiences. Experience working with digital communities or prevention-focused programming. Experience within the United Nations or similar international organizations.
Key skills/competency
- Artificial Intelligence (AI)
- Preventing and Countering Violent Extremism (PCVE)
- Policy Development
- Capacity Building
- Digital Ecosystems
- Risk Management
- Research and Analysis
- Stakeholder Engagement
- Project Management
- Technology Policy
How to Get Hired at UNOPS
- Tailor your resume: Highlight experience in AI, PCVE, and digital policy.
- Showcase expertise: Detail your research, analysis, and policy development skills.
- Quantify achievements: Use numbers to demonstrate the impact of your work.
- Align with UNOPS: Emphasize collaboration and international organization experience.
- Prepare for interviews: Be ready to discuss AI ethics and global security challenges.