Intentionally acting with AI Integrity: What Our Seven Principles Mean for Dream Networks

Generative AI has reached around 53% of the global population in three years, far faster adoption than either the personal computer or the internet achieved. Global corporate investment in AI hit roughly $582 billion in 2025, more than double the previous year. Yet the average score on the Foundation Model Transparency Index, which measures how openly the major AI companies disclose what their models are trained on, what they cost the planet, and what they do, fell from 58 to 40.

AI is spreading at unprecedented speed, while the systems we use to understand and govern it fall further behind. That gap is why integrity matters when we use AI: no external framework yet ensures that we act ethically, with care, and transparently, so we have to hold ourselves to that standard.

Dream Networks is an agile design and play studio operating in the UK and East Africa, co-creating play spaces with children and communities. We are not an AI company. But we use AI, work with partners who use it, and our communities are increasingly represented (or misrepresented) by AI systems we did not build.

That is why we have written down what we expect of ourselves. Our AI Integrity Note sets out seven principles: human-led; safeguarding first; consent and understanding; bias-aware and context-sensitive; non-extractive practice; planet-conscious; and playful, critical engagement.

This post explains what those principles mean in practice and how we will measure ourselves against each one.

Why AI integrity is not optional for us

There are three reasons our values force us to take a position on AI rather than treat it as a neutral productivity tool.

The environmental impact is considerable. Global data centre electricity demand is projected to reach roughly 1,050 terawatt-hours in 2026, enough to rank fifth in the world if data centres were a country, sitting between Japan and Russia. The International Energy Agency projects this will nearly double again by 2030. Training a single large model like GPT-3 consumed an estimated 1,287 megawatt-hours of electricity and emitted around 552 tonnes of CO₂. The water footprint is similarly significant: training GPT-3 in Microsoft's US data centres is estimated to have consumed 5.4 million litres of freshwater, and global AI water withdrawal is projected to reach 4.2–6.6 billion cubic metres by 2027, more than the total annual water withdrawal of Denmark. Roughly 41% of Microsoft's water withdrawals in 2023 came from areas already under high water stress. For a studio whose practice centres on regenerative design and reduced embodied carbon, AI is not an environmentally neutral tool. Every prompt has a footprint. We need to know when that footprint is justified.

The question of bias lands directly on the communities we work with. AI systems trained mostly on Northern Hemisphere data systematically under-represent the Global South and indigenous communities. Studies have shown image generators returning African men in front of thatched huts when prompted for wealthy people, and reverting white children to Black ones, and Black and minority ethnic doctors to white men, even when explicitly prompted otherwise. One systematic review of AI-generated educational images published this year identified recurring biases in gender, race, culture, age, body, and disability across studies [1]. Researchers have begun calling the broader pattern "digital colonisation" and "geographic hallucination": confident, fluent outputs that nonetheless misrepresent the technical, cultural and lived realities of the Global South. When we use AI to visualise a play space in Kiryandongo, or to draft a persona for a young mother in a refugee settlement, we are working with tools that, by default, do not see the people we are designing with.

The question of extraction matters because we work with children. AI systems improve by ingesting data. Community ideas, children's drawings, caregivers' stories: these are precisely the kinds of inputs that proprietary models are hungriest for, and precisely the kinds of inputs we have a duty of care over. The transparency gap (an average Foundation Model Transparency Index score of 40 out of 100) means we usually cannot even verify what a given tool does with what we give it.

These are not theoretical concerns. They are operational ones. They shape what tools we use, when we use them, and what we refuse.

What "human-led" actually means for us

One line stayed with us as we drafted the Integrity Note: AI is a participatory tool that can enable all people to engage in the design process, not a replacement for human judgment, lived experience, or community knowledge. That is the core of how we use it.

It might help us iterate quickly on visual concepts before a co-design workshop. It might help us synthesise a long policy document. It might help us prototype a layout before a community reviews it. It does not draft our facilitation guides for us. It does not generate community personas without us bringing lived insight from the field. And it never, ever sits between a child and a designer.

This is also why we have deliberately chosen not to use AI tools that rely on biometric data, behavioural profiling, or surveillance. There is a growing market for "engagement-tracking" tools aimed at education and child-facing settings. We are not interested.

We cannot change how AI models are trained. We can change what we feed them, what we ask of them, what we take from them, and what we tell our communities when we use them.

That is what the Integrity Note is for. The principles are written down. Now we measure ourselves against them.

How we plan to measure our AI integrity

Human-led: Decision-provenance log opened at project start: every AI-assisted artefact records who made the final editorial decision and what was changed from the AI output (a minimal sketch of what one entry might record follows this list).

Safeguarding first: Child-data exclusion checklist: no images of identifiable children, names, voices, or behavioural data are entered into any AI tool.

Consent and understanding: AI disclosure information sheet prepared in local languages, with an age-appropriate version for children, used at the start of any session where AI may influence outputs.

Bias-aware and context-sensitive: Bias-review checklist applied to every AI output before it leaves the studio: who is represented, who is missing, what stereotypes are present, what cultural-context errors appear.

Non-extractive practice: Approved-tools list specifying which AI tools may be used for which tasks, and explicitly excluding tools that train on user inputs.

Planet-conscious: "AI necessity test" before any major use: does this task need AI? If yes, what is it replacing, and is the replacement lower-carbon? If no, the task is done another way.

Playful, critical engagement: AI reflection note: what we tried, what worked, what we won't repeat, what we want to test next. Shared internally; selected reflections published.
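For the decision-provenance log above, here is a minimal sketch of what a single entry might capture if the log is kept digitally. The field names, the log_decision helper, the file name, and the example values are illustrative assumptions for this post, not part of the published Integrity Note.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Illustrative only: one possible shape for a decision-provenance log entry,
# recording who made the final editorial call on an AI-assisted artefact
# and what was changed from the raw AI output.
@dataclass
class ProvenanceEntry:
    project: str            # project or site name
    artefact: str           # e.g. "concept visual v2", "workshop summary"
    ai_tool: str            # which approved tool produced the draft
    final_decision_by: str  # the human who signed off
    changes_from_ai: str    # what was edited, removed, or overridden
    logged_on: str          # ISO date of the entry

def log_decision(entry: ProvenanceEntry, path: str = "provenance_log.jsonl") -> None:
    """Append one entry to a JSON-lines log file (hypothetical filename)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example usage with made-up values:
log_decision(ProvenanceEntry(
    project="Kiryandongo play space",
    artefact="early concept visual",
    ai_tool="image generator (from approved-tools list)",
    final_decision_by="lead designer",
    changes_from_ai="replaced generated figures with co-designed sketches from the workshop",
    logged_on=date.today().isoformat(),
))
```

However the log is kept, whether in a spreadsheet or a file like this, the point is the same: every AI-assisted artefact carries a named human decision and a record of what was changed.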

AI Integrity Note - Dream Networks CIC
