A white paper published in February 2026 by Explore Learning, The Evidence Imperative, arrives as the global EdTech market is projected to reach USD 348.41 billion by 2030. Authored by Head of Data and AI Dr Hisham Ihshaish, it argues that AI in childhood education must be grounded in continuous learner assessment. In Ireland, where the Digital Strategy for Schools commits to embedding technology across curriculum levels, the difference between evidence-based and hype-driven EdTech has never been more consequential.

The paper's core argument is compelling and overdue. Claims of personalised learning routinely outpace the evidential infrastructure required to deliver them: a system cannot adapt to a learner it does not know. The framework, grounded in Vygotsky, Bloom, and social constructivism, merits serious attention from Irish education leaders on three grounds: the domain-specificity of AI's effects, the dynamic nature of aptitude, and the irreplaceability of human expertise.

Evidence on AI's educational effects is more nuanced than either promoters or critics acknowledge. A 2025 meta-analysis by Dong et al. reports a large overall effect on academic performance (d = 0.92), while Silverman et al.'s 119-study review finds strong writing proficiency gains (g = 0.81) for primary-age children. Yet the same corpus documents negative effects on critical thinking from over-reliance on AI, and the OECD's Digital Education Outlook 2026 identifies a "mirage of false mastery" in which gains from general-purpose AI dissolve once access is removed.

The treatment of aptitude as dynamic rather than fixed is among the paper's most important contributions. Many platforms conduct a single initial assessment and proceed indefinitely on that basis; yet a child's zone of proximal development shifts continuously, and a system that fails to track that shift will misposition learners. The stakes are clearest in the attainment data: the Education Policy Institute's 2025 report shows England's GCSE disadvantage gap remains at 19.1 months, while ESRI research identifies socioeconomic disadvantage as the strongest predictor of underachievement in Ireland.

Explore Learning's human-AI synthesis distributes responsibilities clearly: technology handles adaptive calibration, scaffolding, and progress monitoring; tutors provide the motivational coaching and relationship-building that no algorithm replicates. This aligns with the UK Department for Education's January 2026 Generative AI Product Safety Standards, which mandate progressive disclosure and prohibit features that foster cognitive dependency. Ireland's Department of Education has yet to produce equivalent binding standards for classroom EdTech.

Three reforms deserve priority. The National Council for Curriculum and Assessment should develop procurement standards requiring vendors to demonstrate continuous learning-profile tracking before personalisation claims are accepted. Teacher professional development must equip educators to lead at higher-order cognitive levels where technology falls short. The Digital Strategy for Schools to 2027 should also require AI tools in DEIS schools to publish outcomes data, anchoring equity as a design requirement.

The Evidence Imperative argues not against technology but for technology that earns the trust placed in it. With generative AI present in Irish classrooms without consistent oversight and the global market set to more than double by 2030, the case for rigorous human-centred frameworks is urgent. Knowing every learner at every step is not a feature of responsible EdTech. It is its foundation.

(The views expressed by the writer are his/her own and do not necessarily reflect the views or positions of BusinessRiver.)