Launchpad - Adaptive Learning: How the FT Hub Approaches Evidence
This is a blog post by Grace Owiti as part of the Frontier Tech Launchpad blog series, sharing key content from our 6th session. Learn about the full Launchpad journey here.
In the dynamic world of innovation and technology, turning a promising idea into a scalable solution is full of uncertainties. To navigate this complexity, the Frontier Tech Hub (FT Hub) advocates for an iterative, learning-based approach. Our pilots run structured learning sprints, followed by facilitated moments of review and reflection, allowing teams to make real-time decisions on what to focus on next.
Why this Matters
Evidence is the foundation of good decision-making. Whether you're an innovator developing frontier technology or a policymaker considering the adoption of a new solution, robust evidence helps answer critical questions:
🔍 Does the solution work?
🎯 Is it solving the right problem?
🙋‍♀️ Does it meet a user's needs?
⚖️ How do different inequities affect users' ability to access or benefit from the solution?
📈 Is it scalable and sustainable?
By systematically testing assumptions and collecting relevant data, teams can use frequent pause points to reflect, identify key lessons, and iterate on their approach. If a project is heading in the wrong direction, adaptive learning enables course correction in real time, avoiding costly mistakes and ensuring resources are directed toward high-impact solutions.
What is Adaptive Learning?
Traditional monitoring and evaluation (M&E) often follow a linear approach: implement a program, assess outcomes at the end, and adjust (if there’s still time and funding). This can lead to missed opportunities to pivot and refine solutions.
Adaptive learning takes a different path. Instead of waiting until the finish line, data is gathered and analysed throughout the process, using frequent pause points to reflect, adapt, and improve.
Think of it this way:
Traditional M&E is like using a paper map—you plan your route in advance, follow it, and only at the end do you assess whether you reached your destination.
Adaptive learning is like using Google Maps—you adjust your route dynamically based on real-time traffic data, optimising for efficiency and success.
Too often, projects follow a rigid model: launch, implement, evaluate (if we're lucky). By the time results come in, the funding cycle has ended, decision-makers have moved on, and the chance to pivot has long passed.
We need a smarter way. Instead of treating evidence as something we check at the finish line, we embed it throughout the journey, making real-time adjustments based on what’s working (or not).
Our Approach: Adaptive Learning
At the FT Hub, we take an adaptive learning approach to evidence generation, integrating it into pilot programs through the following steps:
I. Understand the context
Within the FT Hub we encourage teams to first take time to better understand the context and problem space within which they intend to work, before identifying and testing potential solutions. This involves:
Identifying the problems experienced in each context and their scale.
Understanding how different user groups are affected and their varying needs.
Mapping existing actors, processes, infrastructure, and the existing solutions addressing the problem.
The long-term viability of any potential solution will likely rely on grounding it within the existing system. At the FT Hub, we therefore advocate for an adaptive learning approach in which teams reflect on and update their understanding of the context as new evidence emerges, making iterative decisions about the potential design of an intervention.
II. Develop a vision
The second step in the adaptive learning process, following and building on an analysis of the problem space and context, is the identification of an initial ‘vision’ of the potential solution.
Innovation teams should think beyond the technical aspects of a solution to create a holistic vision of how it might operate at a sustainable scale. This vision should incorporate assumptions about partnerships, scaling pathways, and business models needed to develop and maintain the solution, as well as the wider operational and systemic components required for its long-term sustainability. A vision should also define the intended long-term impact of the solution, including how to ensure that impact is inclusive.
Once teams have developed an initial, holistic vision, they should identify the riskiest assumptions within it. These assumptions can then be tested through lean, light-touch methods in the next phase of work. Typically, assumptions can be grouped into common themes and learning questions to form a structured learning agenda.
III. Develop a learning agenda
A structured learning agenda is at the heart of evidence-based pilots. Rather than tracking every possible metric, pilots should focus on answering a few key learning questions—questions that:
Address high-risk assumptions
Are relevant to decision-makers
Can be explored within the pilot’s scope – noting that in the pilot phase the intention is not to begin implementing solutions, but to do the minimum level of work required to answer the learning questions – for example, by developing prototypes that can be tested with users but which lack the full functionality of an actual solution.
For example, if a pilot is testing AI-driven diagnostics for water infrastructure, key learning questions might include:
Can the AI detect faults as accurately as a human?
Does a prototype application meet the needs of end users within the government?
Will government stakeholders trust AI-generated data?
Who might fund the solution, and what type of evidence will be needed to convince people to invest?
IV. Experiment - balancing rigour and practicality
On FT Hub pilots, teams are encouraged to work through sprints of activity, with each sprint involving multiple experiments, where teams conduct research and test assumptions. One of the biggest challenges in evidence generation is finding the right level of rigour.
Too little, and findings may do little to build our confidence in the solution or give us enough feedback to know how – and whether – to proceed.
If research and experimentation approaches lack rigour, a pilot's findings may also lack credibility with the intended audience, whose buy-in may be integral to the longer-term potential of a solution.
Too much rigour, and teams may over-allocate limited time and resources to questions that could be answered through lighter-touch methods – potentially limiting their ability to meaningfully and efficiently test the full range of critical assumptions and find out whether an idea is likely to work.
FT Hub uses a minimum viable proof framework to help teams balance scientific accuracy with real-world feasibility—ensuring that evidence is credible, relevant, and actionable.
V. Engage real users and stakeholders through experiments and research
Does your solution meet a user's needs?
The only way to know is to conduct research – prior to investing in the design of a solution – to understand users’ needs and to identify where those needs are not met by existing solutions.
Once a solution has been identified, it needs to be tested with real users in real contexts, to understand whether it can meet their needs and how different factors such as gender, age, race, disability or income might affect an individual’s ability to access and benefit from it.
Pilot teams conduct experiments with their target users to gather authentic feedback, ensuring their solution fits real-world needs before implementation.
VI. Maintain transparency, review, iterate or exit
Evidence generation is only useful if it’s shared and supports decision making.
Teams document their processes, findings, and decisions openly, fostering trust, collaboration, and collective learning among partners and stakeholders.
Within the FT Hub we divide our work into sprints, with time taken at the end of each sprint to review the findings of a set of experiments and identify what they mean for a potential solution. On the basis of sprint findings, teams may wish to adapt their vision for how a solution might work, or adapt their learning agenda to focus on new assumptions and questions that experiments have shown to be higher risk and in need of prioritisation. Within an adaptive learning approach, iteration is a necessity, not a nice-to-have, as teams adapt and mature their thinking in light of new evidence.
In some cases, a sprint may indicate that a potential solution is highly unlikely to tackle the problem – in which case teams have, on occasion, decided to stop work and ‘exit’ early, avoiding further investment of time, money and energy in solutions that are highly unlikely to succeed or deliver the intended impact.
Challenges and Considerations
Implementing adaptive learning in organisations can be tricky, especially when existing processes lean towards caution and traditional methods. The key to overcoming this is fostering a culture that encourages flexibility, experimentation, and learning from mistakes – while getting leadership on board by showing the benefits. It also means understanding what different stakeholders need from the evidence and tailoring your approach accordingly, all while ensuring ethical practices such as informed consent and protecting participants’ well-being. A critical enabler of success is adopting multi-disciplinary methods and being open to interrogating a wide variety of assumptions, beyond the ‘technical’ questions innovation teams are typically most comfortable exploring.
Conclusion
Evidence isn’t just a checkbox—it’s the key to making smarter decisions, securing stakeholder buy-in, and driving real impact.
By embedding evidence throughout the pilot journey, engaging authentically with users, and balancing rigour with practicality, teams can ensure their programs have a life beyond the initial implementation cycle.
When we adapt, iterate, and learn in real time, we move beyond experimentation—toward solutions that truly work.
If you’d like to dig in further…
1️⃣ Read about the importance of Storytelling in our Launchpad piece by Lil.
📚 Read more about the full journey so far on the Launchpad page here.
⛴️ If you’d like to hear more about this session, please reach out to grace@hellobrink.co.