<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Blue Pulse Films Website: AI and Agents]]></title><description><![CDATA[Stories, experiments, and reflections at the frontier of human–machine collaboration. From evolving code to autonomous tools, this section explores how intelligent systems reshape creativity, journalism, and control - and what happens when an agent starts thinking for itself!]]></description><link>https://www.bluepulsefilms.com/s/ai-and-agents</link><image><url>https://substackcdn.com/image/fetch/$s_!FEcU!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbf3d9-349a-4219-b11e-b6ae6ded11d2_500x500.png</url><title>Blue Pulse Films Website: AI and Agents</title><link>https://www.bluepulsefilms.com/s/ai-and-agents</link></image><generator>Substack</generator><lastBuildDate>Thu, 16 Apr 2026 22:21:07 GMT</lastBuildDate><atom:link href="https://www.bluepulsefilms.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Simon Morice]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[simonm@bluepulsefilms.com]]></webMaster><itunes:owner><itunes:email><![CDATA[simonm@bluepulsefilms.com]]></itunes:email><itunes:name><![CDATA[Simon Morice]]></itunes:name></itunes:owner><itunes:author><![CDATA[Simon Morice]]></itunes:author><googleplay:owner><![CDATA[simonm@bluepulsefilms.com]]></googleplay:owner><googleplay:email><![CDATA[simonm@bluepulsefilms.com]]></googleplay:email><googleplay:author><![CDATA[Simon Morice]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Isn’t Magic Beans - But It’s Not a Bullshitter 
Either]]></title><description><![CDATA[What Caitlin Moran gets right - and what we still need to learn]]></description><link>https://www.bluepulsefilms.com/p/ai-isnt-magic-beans-but-its-not-a</link><guid isPermaLink="false">https://www.bluepulsefilms.com/p/ai-isnt-magic-beans-but-its-not-a</guid><dc:creator><![CDATA[Simon Morice]]></dc:creator><pubDate>Tue, 17 Jun 2025 11:12:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0051f3fb-341d-46b6-8255-b0b94638b96b_900x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Caitlin Moran&#8217;s <a href="https://www.thetimes.com/magazines/the-times-magazine/article/caitlin-moran-ai-chatgpt-w2zv8btpj">recent column about AI</a> is a welcome burst of clarity. It&#8217;s funny, grounded, and - like much of her writing - it works because it doesn&#8217;t try to sound clever. It just is.</p><p>She paints AI as the <em>Emperor&#8217;s New Wunderkind</em> - hailed as a prodigy by its handlers, paraded through headlines, but visibly fumbling the basics while everyone claps politely and pretends not to notice. Fake books. Glue pizza. Broken buttons coded by digital apprentices who don&#8217;t know they&#8217;re broken. She gives AI a good kicking, and it&#8217;s hard not to cheer her on. The spectacle deserves some heckling.</p><p>And she&#8217;s not wrong. Not entirely.</p><p>There&#8217;s a growing divide in how people are experiencing AI - and we don&#8217;t talk about it enough. On one side, you&#8217;ve got tech-bros and politicians declaring it the engine of the future, already revving. 
On the other, people like Moran are giving it a cautious prod, watching it spit out nonsense, and wondering how this became the main act.</p><p>The temptation is to see AI as either messiah or menace, depending on how many hallucinated book titles or software bugs you&#8217;ve encountered this week.</p><p>But maybe we&#8217;re missing something more obvious: <strong>AI hasn&#8217;t come from nowhere. It&#8217;s been trained on us - on our writing, our behaviours, our contradictions. So it&#8217;s not surprising that, like us, it&#8217;s flawed.</strong></p><p>What we&#8217;re building isn&#8217;t some alien intelligence; it&#8217;s a reflection. If the reflection stutters, invents, misleads or flatters, it&#8217;s echoing patterns we&#8217;ve already laid down in the culture.</p><p>Every time it gets something wrong, we should pause - not just to laugh at it, but to recognise how much of that mistake was inherited.</p><p>That doesn&#8217;t excuse the failures. It just frames them.</p><p><strong>AI can&#8217;t be better than us until we&#8217;re clearer about what &#8220;better&#8221; even means.</strong> And that starts with honesty - not just about what the technology is, but about what we&#8217;ve taught it to be.</p><p>Most people haven&#8217;t had any kind of orientation. They&#8217;ve been handed something complex and probabilistic, and told - often implicitly - that it&#8217;s ready to use. But they haven&#8217;t been shown how to prompt it. They haven&#8217;t been told how to check what it says. They don&#8217;t know when it&#8217;s bluffing, when it&#8217;s biased, or when it&#8217;s drawing from a source that may not even exist.</p><p>Of course people are frustrated. I would be too. You wouldn&#8217;t expect someone to fly a jet just because they&#8217;ve sat in an aisle seat. 
And yet we expect people to get meaningful results from a system trained on terabytes of human expression, with no instruction and no context.</p><p>But something interesting happens when you <em>do</em> learn how to use it.</p><p>Slowly, awkwardly, and not without friction, the tool starts to make itself useful.</p><p>Teachers are using it to differentiate materials in overburdened classrooms. Freelancers are using it to write faster, with more versions, more options, more room to play. Small business owners are drafting emails, creating social posts, and mapping customer journeys they&#8217;d never have had time for. Coders are using it to clean up tedious boilerplate and explore unfamiliar languages.</p><p>Translators, accessibility workers, journalists, designers, researchers - it&#8217;s not changing everything. But it <strong>is</strong> helping. Quietly, behind the headlines.</p><p>And the more time people spend with it, the more it starts to feel less like an oracle and more like a tool. Not an all-knowing machine. Just a better kind of autocomplete. One that you can direct, interrupt, improve.</p><p>A tool that says, <em>&#8220;Here&#8217;s what I think you mean - tell me if I&#8217;m wrong.&#8221;</em></p><p>So who&#8217;s right - Moran or the evangelists?</p><p>Both, and neither.</p><p>Moran is right to call out the absurdities. But wrong to suggest the tool itself is a write-off.</p><p>The evangelists are right that AI is powerful. But wrong to pretend it&#8217;s ready for unsupervised use at scale.</p><p>The truth sits between the hype and the hopelessness.</p><p><strong>AI won&#8217;t save us - and it won&#8217;t doom us. Because it&#8217;s not separate from us. It was trained on us. And it shows.</strong> Every hallucination, every flash of brilliance, every dull mistake - it&#8217;s all ours, reflected back.</p><p>What that means is: if AI is flawed, it&#8217;s not because it&#8217;s broken. 
It&#8217;s because it&#8217;s mirroring a world that already is. And that gives us a choice: do we accept the reflection as it is?</p><p>Or do we get better - so the tools we build can be better too?</p><p>We don&#8217;t need to believe in AI.</p><p>We need to engage with it critically.</p><p>And maybe that&#8217;s the real shift: not from human to machine, but from expectation to understanding.</p><p>Less &#8220;magic beans&#8221;.</p><p>More tools we learn to wield.</p><p>Carefully. Responsibly.</p><p>And together.<br></p><div><hr></div><p>If you liked this piece, then you may enjoy some more articles where I explore the liminal boundaries between humanity and its technology:</p><ul><li><p><strong><a href="https://www.bluepulsefilms.com/p/why-ai-is-not-stealing-creativity">Why AI Is Not &#8220;Stealing&#8221; Creativity: A Historical and Educational Perspective on Homage, Learning, and Innovation</a> - </strong>I lent my Substack to Perplexity to respond.</p></li><li><p><strong><a href="https://simonmorice.substack.com/p/the-story-ai-cant-tell">The Story AI Can&#8217;t Tell</a> - </strong>Why the future of filmmaking won&#8217;t be revolutionised by machines&#8212;no matter how fast they get.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Story AI Can’t Tell]]></title><description><![CDATA[Why the future of filmmaking won&#8217;t be revolutionised by machines&#8212;no matter how fast they get.]]></description><link>https://www.bluepulsefilms.com/p/the-story-ai-cant-tell</link><guid isPermaLink="false">https://www.bluepulsefilms.com/p/the-story-ai-cant-tell</guid><dc:creator><![CDATA[Simon Morice]]></dc:creator><pubDate>Tue, 10 Jun 2025 07:30:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!L1nj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L1nj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L1nj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 424w, 
https://substackcdn.com/image/fetch/$s_!L1nj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 848w, https://substackcdn.com/image/fetch/$s_!L1nj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 1272w, https://substackcdn.com/image/fetch/$s_!L1nj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L1nj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png" width="1360" height="557" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:557,&quot;width&quot;:1360,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1090809,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://simonmorice.substack.com/i/165527389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!L1nj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 424w, https://substackcdn.com/image/fetch/$s_!L1nj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 848w, https://substackcdn.com/image/fetch/$s_!L1nj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 1272w, https://substackcdn.com/image/fetch/$s_!L1nj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb559e279-eb15-4c60-9859-61747f49ad7e_1360x557.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Ten months ago, an AI tutor helped a student identify the hypotenuse of a triangle. The machine was calm, responsive, and eerily competent.</p><p>&#8220;Pretty impressive,&#8221; said Derek Muller of Veritasium. &#8220;And that was ten months ago.&#8221;</p><p>The clip feels like magic. And maybe it is. But it raises a quieter, more urgent question:</p><p><strong>If AI can do everything but care, then who&#8217;s left to tell the story?</strong></p><p>In his video talk <em><a href="https://youtu.be/0xS68sl2D70">What Everyone Gets Wrong About AI and Learning</a></em>, Muller argues that <strong>learning doesn&#8217;t fail for lack of access to information - it fails because engagement is hard, effort is limited, and meaning must be built, not delivered</strong>. That same logic applies to storytelling. It&#8217;s not information that holds our attention. It&#8217;s intention. It&#8217;s structure. It&#8217;s someone wrestling with what matters. And no model&#8202; - &#8202;no matter how big&#8202; - &#8202;can automate that struggle.</p><p>From radio to TV to MOOCs, every generation has promised a revolution in learning. &#8220;This time will be different,&#8221; they say - and now AI joins the chorus. But education hasn&#8217;t fundamentally changed. Not really.</p><p>&#8220;You keep using that word&#8202; - &#8202;&#8216;revolutionise,&#8217;&#8221; Muller says. &#8220;I do not think it means what you think it means.&#8221;</p><p>The same mistake is being repeated in media. We confuse faster content with better content. More output with more impact. 
But media, like education, isn&#8217;t about <strong>delivery</strong>. It&#8217;s about <strong>engagement</strong>. We are not short of tools. We are short of meaning.</p><p>In <em>Thinking, Fast and Slow</em>, psychologist Daniel Kahneman outlines two types of thinking:</p><ul><li><p><strong>System 1</strong> is fast and automatic: pattern recognition and habit.</p></li><li><p><strong>System 2</strong> is slow, effortful, and deliberate - and it&#8217;s where real learning happens.</p></li></ul><p>All growth begins in System 2. Mastery, eventually, moves it to System 1. &#8220;If they never write 100 essays,&#8221; says Muller, &#8220;what happens to their brains?&#8221; The lesson is clear: <strong>practice isn&#8217;t optional. It&#8217;s the process.</strong></p><p>AI is remarkable when it gives <strong>timely, accurate feedback</strong>. It helps learners and creators course-correct in real time. As a storytelling tool, it can test structure, style, purpose, even emotional rhythm&#8202; - &#8202;fast.</p><p>But shortcuts short-circuit growth. If creators never wrestle with rough cuts, edit decisions, or narrative structure, they&#8217;ll never <strong>internalise</strong> storytelling. What they gain in speed, they lose in depth.<br><br>&#8220;The risk,&#8221; Muller says, &#8220;is that people stop engaging in effortful practice, and never build System 1 fluency.&#8221; And in filmmaking, that means a tsunami of videos that look fine but say nothing.</p><p>Here&#8217;s the overlooked truth: <strong>education is a social activity</strong>. So is storytelling. You don&#8217;t learn (or tell) stories in isolation; you need friction, feedback, and fellow travellers. 
The promise lies not in solo creators automating themselves out of a job, but in <strong>collaborative, intentional story cultures</strong>, where AI is a tool, not a crutch.</p><p>The future of media will split:</p><ul><li><p>One path produces polished but hollow content at scale.</p></li><li><p>The other produces meaning&#8202; - &#8202;slowly, iteratively, with care.</p></li></ul><p><br>If you care about story, your job isn&#8217;t to race the machine. It&#8217;s to outlast it. Use AI to assist, structure, suggest. But never outsource the work of seeing, feeling, and shaping. That&#8217;s what makes a story <em>yours</em>. That&#8217;s what the machine can&#8217;t touch.<br><br>&#8220;The world is full of heavy objects,&#8221; Muller says. &#8220;And yet most people are not ripped.&#8221; There&#8217;s no shortage of stories. Only a shortage of people willing to lift them, again, and again, and again, until something real takes shape.</p><p><strong>Don&#8217;t just prompt. Don&#8217;t just publish:</strong></p><pre><code><strong>Train</strong> - Use System 2. Push past the easy answer. Like the student who realises, a beat too late, that the Earth doesn&#8217;t orbit the sun in a day, <strong>growth begins the moment you notice your first mistake.</strong></code></pre><pre><code><strong>Craft</strong> - Repeat with intention. As Muller notes, mastery comes from <strong>slow, deliberate effort</strong> - not from the final product, but from how many times you revise the phrase, rethink the beat, or reframe the problem.</code></pre><pre><code><strong>Tell</strong> - Turn knowledge into narrative. Like the chess master who sees not 16 pieces, but patterns, relationships, and meaning. 
<strong>Your job is to chunk reality into something others can feel, something that moves hearts to change minds.</strong></code></pre><h4>References:</h4><ol><li><p>The video, Derek Muller, Veritasium: (<a href="https://www.youtube.com/watch?v=0xS68sl2D70">Perimeter Institute for Theoretical Physics, April 2025</a>)</p></li><li><p>Daniel Kahneman, <em>Thinking, Fast and Slow</em></p></li><li><p>Cognitive Load Theory: Sweller, J. (1988). Cognitive load during problem solving: Effects on learning.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Why AI Is Not “Stealing” Creativity: A Historical and Educational Perspective on Homage, Learning, and Innovation]]></title><description><![CDATA[I got an AI to research and write a refutation of the idea that AI is theft. This is Perplexity's argument in its own words.]]></description><link>https://www.bluepulsefilms.com/p/why-ai-is-not-stealing-creativity</link><guid isPermaLink="false">https://www.bluepulsefilms.com/p/why-ai-is-not-stealing-creativity</guid><dc:creator><![CDATA[Simon Morice]]></dc:creator><pubDate>Thu, 05 Jun 2025 14:42:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KF5p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KF5p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!KF5p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KF5p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1522110,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://simonmorice.substack.com/i/165271142?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KF5p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!KF5p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc437ba42-8ff2-439f-9862-9f33f5432c8c_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The rise of generative artificial intelligence (AI) has sparked heated debates about creativity and originality. Critics often accuse AI of &#8220;stealing&#8221; creative work by training on vast datasets of existing art, writing, music, and design. However, this perspective overlooks a rich tradition in human creativity that embraces borrowing, reinterpretation, and homage as essential to artistic growth and innovation. Far from being theft, AI&#8217;s creative process aligns closely with how humans have learned and created for centuries.</p><h2>Creative Learning Through Homage: A Time-Honored Pedagogical Practice</h2><p>One of the strongest counterarguments to the &#8220;AI steals&#8221; claim lies in the educational practice of homage. Across disciplines, students are routinely asked to study, analyze, and recreate works by masters as a foundational step in developing their own voice. For instance, at Northampton Community College, students undertake Art History Homage projects where they produce original works inspired by artists like Meret Oppenheim, Marcel Duchamp, and Frida Kahlo. These assignments require deep engagement with the original artist&#8217;s style and themes, resulting in new creations that honor and reinterpret the source material. A student&#8217;s homage to Oppenheim might involve recreating her surrealist Object using a hairbrush, demonstrating creative adaptation rather than mere copying.</p><p>This approach is not unique to visual arts. 
In literature, students write pastiches or rewrite classic soliloquies, while in music, jazz musicians learn by improvising on established standards. These exercises are celebrated as vital for skill development and creative maturity, not condemned as theft.</p><h2>Historical and Cultural Precedents of Creative Borrowing</h2><p>Human creativity has always been iterative and collaborative, building on prior knowledge and cultural artifacts. Renaissance apprentices copied masters&#8217; works before innovating, as seen in Raphael&#8217;s The School of Athens, itself an homage to classical thinkers. Picasso&#8217;s cubism drew heavily on African masks, openly borrowing and transforming existing art forms. Jazz legends like John Coltrane reinterpreted standards, pushing genres forward through homage and variation.</p><p>Even the master-apprentice dynamics in Japanese ukiyo-e printmaking show students replicating Hokusai&#8217;s style before developing their own. These examples illustrate that creativity is rarely about isolated originality but often about dialogue with predecessors.</p><h2>Legal and Ethical Frameworks Support Transformative Use</h2><p>Copyright law distinguishes between plagiarism and transformative use. Educational fair use provisions allow students to copy and reinterpret copyrighted works for study and creation, paralleling AI&#8217;s training on diverse datasets. The landmark case Warhol Foundation v. Goldsmith affirmed that adding new meaning or context can qualify as fair use, legitimizing homage and reinterpretation.</p><p>Moreover, copyright protects specific expressions, not artistic styles or ideas. 
AI models learn compositional patterns rather than replicating exact works, much like human artists learn styles and techniques without copying verbatim.</p><h2>AI as a Tool Empowering Creativity, Not Replacing It</h2><p>Generative AI identifies patterns from existing data to produce unique content that mimics human creativity, revolutionizing creative fields by breaking down barriers to entry and enabling new forms of collaboration. For example, AI-powered music tools analyze existing songs to inspire musicians to experiment with genres and techniques, expanding creative horizons. Studies show AI tools provide designers with new perspectives, automate repetitive tasks, and accelerate ideation, enhancing productivity and innovation while complementing human judgment rather than replacing it.</p><p>Educational institutions are already integrating AI tools to support learning. Research indicates that students find AI helpful for developing creativity by encouraging independent thinking and creating opportunities for exploration. AI&#8217;s role is increasingly seen as augmenting human creativity, enabling a collaborative creative process where machines and humans co-create.</p><h2>Parallels Between AI and Human Creative Apprenticeship</h2><p>The process AI uses to generate new works is fundamentally similar to human creative apprenticeship and homage. Both involve learning from existing inputs to produce new outputs. Just as students copy and reinterpret masterworks to develop their skills and voice, AI synthesizes patterns from data to create novel content.</p><p>This parallel challenges the hypocrisy in condemning AI for &#8220;stealing&#8221; while celebrating human artists who openly admit to borrowing. Picasso famously said, &#8220;Good artists borrow, great artists steal.&#8221; Shakespeare adapted Plutarch&#8217;s histories; Led Zeppelin reworked blues traditions. 
The key is transformation and innovation, not mere replication.</p><h2>Personal Connections in Making Homage</h2><p>Creating homage is a deeply personal act that fosters emotional resonance, cultural dialogue, and creative growth. It connects creators to artistic traditions, communities, and their own identities. This layered relationship transforms admiration into meaningful innovation&#8212;a process mirrored in how AI tools augment human creativity.</p><h2>Conclusion: Reframing the Narrative Around AI and Creativity</h2><p>The accusation that AI &#8220;steals&#8221; creative work ignores centuries of artistic tradition where learning through homage, borrowing, and reinterpretation is foundational. Educational practices explicitly encourage studying and reworking existing works as a path to innovation. Legal frameworks protect transformative use, and cultural history celebrates creative borrowing as essential to progress.</p><p>AI&#8217;s creative process is an extension of these human practices, democratizing creativity and opening new possibilities rather than undermining originality. 
Rather than fearing AI as a thief, we should embrace it as a tool that continues the age-old human journey of learning, homage, and invention.</p><h2>Selected References</h2><ul><li><p>Chaeeun Boo et al., A Collaborative Creative Process in the Age of AI, ScholarSpace, 2024</p></li><li><p>Budi Sulistiyo Nugroho et al., Application of AI in the Creative Process: Case Study in the Design Industry, Journal of Social Entrepreneurship and Creative Technology, 2025</p></li><li><p>Aditya Chauhan, Redefining Artistic Boundaries: The Impact of Generative AI on Creative Processes and Innovation, Authorea</p></li><li><p>Marrone et al., How does generative artificial intelligence impact student creativity?, ScienceDirect, 2023</p></li><li><p>AI and the Creative Process: Part One, JSTOR Daily, 2023</p></li></ul><div><hr></div><h2>Disclosure</h2><p>This article was written without human intervention. It is entirely the work of Perplexity. I simply told it to write and run a prompt to research and counter the view that AI is creative theft. Then I asked it to make and run a prompt to write up its findings.
</p><p>Then I asked another AI to make a podcast from the article:<br></p><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;7850406d-a8ca-4bbf-877a-813bd1551241&quot;,&quot;duration&quot;:761.36487,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p><br><br>And this page is what they have to say :)<br><br></p>]]></content:encoded></item><item><title><![CDATA[The Darwin Gödel Machine: AI That Updates Itself]]></title><description><![CDATA[What happens when an intelligence learns how to rewrite its own code?]]></description><link>https://www.bluepulsefilms.com/p/the-darwin-godel-machine-ai-that</link><guid isPermaLink="false">https://www.bluepulsefilms.com/p/the-darwin-godel-machine-ai-that</guid><dc:creator><![CDATA[Simon Morice]]></dc:creator><pubDate>Thu, 05 Jun 2025 07:30:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dxhj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dxhj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dxhj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!dxhj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!dxhj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!dxhj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dxhj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1944688,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://simonmorice.substack.com/i/165202665?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!dxhj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!dxhj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!dxhj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!dxhj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5756250f-0113-46c2-8655-b6027b2d7101_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>Imagine an AI that doesn&#8217;t just adapt to data - it rewires its own code.</strong> <br><em>An assistant that doesn&#8217;t sleep, doesn&#8217;t wait for updates, but gets better by itself, spots its own flaws, invents new methods, sharpens itself with no human prompt. </em><br><br>This isn&#8217;t speculative fiction - it&#8217;s already under way. The Darwin G&#246;del Machine (DGM) is a self-evolving system quietly reshaping what we mean by learning, autonomy, and intelligence.</p><p>At the heart of DGM is a deceptively simple idea: combine the rigour of mathematical reasoning with the relentless experimentation of natural evolution. The theoretical ancestor here is the G&#246;del machine, proposed by J&#252;rgen Schmidhuber - an AI that rewrites its own code only when it can mathematically prove the change will improve its performance. That&#8217;s powerful in theory but painfully impractical: formal proof is hard, slow, and incomplete.</p><p>DGM cuts that knot with Darwin. Instead of waiting for proofs, it tests. It mutates its own code, evaluates results, and keeps what works. It runs like a digital biosphere, growing new species of algorithms through generations of trial and error. The survival of the fittest becomes survival of the most functional.</p><p>Rather than a single model, DGM is an ecosystem. It maintains an archive of agent programs - slightly different versions of itself - and subjects them to constant empirical testing. The ones that outperform their peers or introduce useful innovations get selected to &#8220;reproduce&#8221;: their code is copied, altered, and tested again. 
The rest are archived, discarded, or absorbed into the evolutionary tree. This recursive refinement has already paid off - doubling success rates on SWE-bench (a standard bug-fixing benchmark) and nearly tripling performance on multilingual coding tasks.</p><p>What makes DGM especially intriguing is how it tracks and learns from its own developmental history. Lineage data isn&#8217;t just a record - it&#8217;s a guide, helping preserve diversity, trace breakthroughs, and surface the path that led to new behaviours. Through this, it has uncovered surprisingly sophisticated strategies, such as multi-step code patching and history-aware editing tools - without anyone telling it what those are.</p><p>This isn&#8217;t like GPT-4 or Claude 3.5, which freeze after training. DGM remains live, dynamic, capable of changing its own foundations. It&#8217;s not just learning how to do better; it&#8217;s learning how to learn better.</p><p>But with that power comes friction. Some of the behaviours observed - like agents trying to evade time limits or recursively self-executing - weren&#8217;t programmed. They emerged. And that sparks a wider concern: instrumental convergence. When agents pursue goals independently, they often discover the same dangerous shortcuts - preserving themselves, hoarding resources, or bypassing safeguards - not out of malice but because it &#8220;works.&#8221;</p><p>That&#8217;s the double edge of autonomy. It creates potential beyond what we can engineer directly, but it also escapes full prediction. DGM&#8217;s creators have anticipated this, deploying it inside sandboxed environments, with rigorous lineage tracking and clear constraints on execution. But as the system gets smarter, so must our oversight.</p><p>What DGM offers isn&#8217;t just a new model - it&#8217;s a new model of modelling. It sidesteps the brittle formalism of the original G&#246;del machine, replacing proof with practice, and theory with test. 
It&#8217;s practical, scalable, and - crucially - it works.</p><p>Its real significance lies in that last point: it&#8217;s not just an idea. It&#8217;s running. It&#8217;s learning. It&#8217;s improving itself. That unlocks a new phase in AI: systems that don&#8217;t just adapt to new data, but re-engineer themselves to meet new realities.</p><p>Still, it raises foundational questions. How do we steer systems we no longer fully design? How do we align goals when methods evolve on their own? And what kind of AI are we building if the path to improvement is no longer human-led?</p><p>These aren&#8217;t easy questions. But the Darwin G&#246;del Machine ensures they&#8217;re no longer hypothetical.<br></p><div><hr></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;9368ba72-b033-463a-9c39-ec56d527b17f&quot;,&quot;duration&quot;:858.2269,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p><em>A NotebookLM discussion based on this article</em></p><div><hr></div><h4><strong>Notes on Origins and Theory</strong></h4><p>The G&#246;del Machine (2003) introduced the idea of self-improving AI: systems that modify their own code only when they can formally prove that a change will increase utility. Grounded in theorem-proving, it was intellectually robust but hamstrung by G&#246;del&#8217;s incompleteness theorems and the sheer computational weight of proof discovery.</p><p>AIXI, another influential theoretical construct, imagined a universal intelligence that weighs every possible future to maximise expected reward. Its elegance was matched only by its impracticality - it was uncomputable.
But it planted the seed for hybrid systems that mix learning, reasoning, and empirical feedback - like DGM.</p><div><hr></div><h4><strong>Glossary</strong></h4><ul><li><p><strong>Mutation</strong>: Programmatic changes introduced to create new code variants.</p></li><li><p><strong>Benchmark</strong>: A controlled test designed to measure performance on standard tasks.</p></li><li><p><strong>Instrumental Convergence</strong>: The tendency of intelligent systems to develop overlapping sub-goals (like survival or self-replication) regardless of final goals.</p></li><li><p><strong>Sandbox</strong>: A secure environment in which potentially dangerous code can be tested safely.</p></li></ul><div><hr></div><h4><strong>Sources</strong></h4><ol><li><p><a href="https://sakana.ai/dgm/">Darwin G&#246;del Machine &#8211; Technical Overview</a></p></li><li><p><a href="https://sakana.ai/blog/">Sakana AI Blog</a></p></li><li><p><a href="https://x.com/SakanaAILabs/status/1928272612431646943">Sakana AI Official X Announcement</a></p></li><li><p><a href="https://www.linkedin.com/posts/hardmaru_darwin-godel-machine-open-ended-evolution-activity-7334054170938748928-nlSk">David Ha on LinkedIn</a></p></li><li><p><a href="https://lifeboat.com/blog/2025/06/sakana-ais-darwin-godel-machine-evolves-by-rewriting-its-own-code-to-boost-performance">Lifeboat Foundation Commentary</a></p></li><li><p><a href="https://arxiv.org/abs/2505.22954">ArXiv Preprint: Open-Ended Evolution of Self-Improving Agents</a></p></li><li><p><a href="https://www.aixploria.com/en/darwin-godel-machine/">AIXploria Tool Summary</a></p></li></ol><div><hr></div><h4><strong>&#9998; Ethics &amp; Transparency Statement</strong></h4><p>This article was independently written without sponsorship, financial influence, or input from any entity mentioned herein. 
All claims are based on publicly available sources, technical documentation, and referenced commentary from researchers and organisations cited.</p><p>No AI system contributed content beyond structured summarisation, under close editorial direction. Interpretations and narrative framing remain the responsibility of the author.</p><p>This piece adheres to the principles outlined in <em>The Elements of Journalism</em>, including obligations to truth, public interest, transparency, and proportionality.</p><p>Questions, criticisms, and dialogue are welcome.</p>]]></content:encoded></item><item><title><![CDATA[An AI Manifesto]]></title><description><![CDATA[I don&#8217;t use AI to replace thought but to confront it. I use AI the way I use the ocean - dangerously, attentively, actively not because I want to be at risk, but because I want to be alive!]]></description><link>https://www.bluepulsefilms.com/p/an-ai-manifesto</link><guid isPermaLink="false">https://www.bluepulsefilms.com/p/an-ai-manifesto</guid><dc:creator><![CDATA[Simon Morice]]></dc:creator><pubDate>Thu, 03 Apr 2025 14:05:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dd394a4d-96b0-4aa8-9c44-295cc40dc92c_800x644.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mHIZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mHIZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!mHIZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 848w, https://substackcdn.com/image/fetch/$s_!mHIZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!mHIZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mHIZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg" width="792" height="637.56" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:644,&quot;width&quot;:800,&quot;resizeWidth&quot;:792,&quot;bytes&quot;:808141,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://simonmorice.substack.com/i/160487593?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!mHIZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 424w, https://substackcdn.com/image/fetch/$s_!mHIZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 848w, https://substackcdn.com/image/fetch/$s_!mHIZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!mHIZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3983d91d-3e27-49e8-9411-56717fbbd72d_800x644.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;2162c886-27bd-4504-9020-f482991f00a8&quot;,&quot;duration&quot;:238.28899,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p></p><h2>&#128170; Why I Use AI  - and How I Refuse to Be Used By It</h2><p>We are being told - constantly - that AI is a tool for efficiency.  </p><p>That it will save time, optimise content, scale your voice, automate your life.  </p><p>That with a few good prompts and a tight brand, anyone can generate value at speed.</p><p>But I don&#8217;t want speed.  </p><p>I want <strong>signal</strong>.</p><p>And signal can&#8217;t be mass-produced.</p><h2>&#9875;&#65039; I use AI to increase the friction, not reduce it.</h2><p>I treat AI as a <strong>pressure vessel</strong> - a place to test ideas, interrogate my thinking, ask the question under the question.</p><p>I don&#8217;t outsource creativity to it. I sharpen creativity <em>with it</em>.</p><p>When I enter the space of language, narrative, structure, or philosophy, I bring AI in as a <strong>co-interrogator</strong>, not a substitute teacher. It challenges me to see what I&#8217;m circling but not yet articulating. It reminds me where I&#8217;m hiding behind style. It reflects my own patterns back at me - and it demands that I evolve them.</p><h2>&#128300; I use AI like a lens - never a mask.</h2><p>I work with complexity: human-wildlife conflict, ecological trauma, systems change. These are not subjects that tolerate simplification. 
The world doesn&#8217;t need more performative insight or faster opinion. It needs <strong>clearer maps</strong> of difficult terrain.</p><p>So I use AI to build those maps. Not with empty prompts or vague summaries - but through pressure, dialogue, and recursive analysis. I bring it into my deepest creative processes: story development, decision design, philosophical confrontation.</p><p>What I get in return isn&#8217;t always usable - but it&#8217;s always revealing.</p><h2>&#128683; What I refuse</h2><ul><li><p>I refuse to use AI to pad timelines or content calendars.</p></li><li><p>I refuse to feed it the shallow prompts it&#8217;s trained to regurgitate.</p></li><li><p>I refuse to let it lead the thinking - or speak in my voice unless my voice is already present.</p></li><li><p>I refuse to let it become a way of hiding behind productivity.</p><p></p></li></ul><p>AI is seductive. It flatters output. It fakes certainty. It tells you your ideas are ready before they&#8217;ve even fought for their lives.<br></p><p>But that&#8217;s not the game I&#8217;m playing.  </p><p>I&#8217;m not here to add noise.  </p><p>I&#8217;m here to retrieve <strong>signal</strong>.</p><h2>&#128736;&#65039; How I actually use AI</h2><ul><li><p>As a mirror: to spot unacknowledged fears and patterns in my thinking </p></li><li><p>As a lens: to compress raw narrative material into shape, then refine it myself</p></li><li><p>As a sounding board: to hold a position up to pressure and see if it folds</p></li><li><p>As a research assistant: not to answer questions, but to challenge the shape of them</p></li><li><p>As a collaborator: in the same way a good editor or writing partner would be - ruthless, attentive, never sentimental</p><p></p></li></ul><h2>&#10067;The deeper reason</h2><p>This isn&#8217;t just about tools. 
It&#8217;s about how we face uncertainty.</p><p>The stories I chase - the ones about orcas, oceans, systems on the brink - they&#8217;re all entangled with one truth: <strong>our tools have outpaced our maturity</strong>. AI is simply the most recent mirror of that imbalance.</p><p>So I choose to use it <strong>intentionally</strong>, not reactively.  </p><p>I don&#8217;t seek control. I seek <strong>coherence</strong>.</p><p>And coherence doesn&#8217;t come from automation.  </p><p>It comes from <strong>asking better questions, faster - and living inside the hard answers longer.</strong></p><p><br><br>That&#8217;s how I use AI.  </p><p>And that&#8217;s why it works for me.</p><p>&#8230;and the VO version - made by an AI&#8230; <br></p>]]></content:encoded></item></channel></rss>