# AI Podcast & NPC Generation

*2026-02-07 — AI — By Kim*

> Building a yes/no question app that evolved into AI-generated personas creating questions, answering them, and producing a fully automated podcast using Cloudflare Workers, WASM, and Rust on the edge.
So I made an app as one does.
I usually create lots of different web apps, mobile apps, game-jam entries, or dev tools. It's explorative and fun. For me that's just as much artwork as making games or painting.
It can be interesting where software creation leads if you just let it lead.

I had this simple idea: what if we make a database of yes and no questions? Binary answers are fun, easy to build and design for, and easy to answer...
Then it spun away into AI-token-munching podcast land of edge-compute MP3s.

[🔊 Audio](/podcast-episode-1.mp3)

[🔊 Audio](/podcast-episode-2.mp3)

## I called it Open Question

The design rules were simple.

1. Someone creates a question - the question is stored in the database.
2. Someone answers the question - either **yes** or **no**.
3. All answers are stored in the database.
4. We get some nice data to analyze and make some nice views and draw insights from.
5. You can create your own questions and get answers.
6. We pick one question per day as the main one and push it to the users.
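The six rules above boil down to two records and one aggregation; here's a minimal sketch in TypeScript (types and names are my own guesses, not the actual schema):

```typescript
// Hypothetical data model for the question/answer flow (not the real schema).
interface Question {
  id: string;
  title: string;       // the yes/no question text
  explanation: string; // short context shown under the title
  createdAt: string;   // ISO date, used when picking the question of the day
}

interface Answer {
  questionId: string;
  value: boolean; // true = yes, false = no
}

// Rule 4: derive a simple insight from the stored answers.
function tally(answers: Answer[], questionId: string): { yes: number; no: number } {
  const relevant = answers.filter((a) => a.questionId === questionId);
  const yes = relevant.filter((a) => a.value).length;
  return { yes, no: relevant.length - yes };
}

const answers: Answer[] = [
  { questionId: "q1", value: true },
  { questionId: "q1", value: true },
  { questionId: "q1", value: false },
];
const result = tally(answers, "q1");
```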


---

![Open Question notification](/images/news/open-question-notifications.jpg)
*Apple's Push Notification service is used to send notifications to users.*

---

A super simple SwiftUI view example of the voting:
```swift
ScrollView {
    QuestionImage(questionId: q.id, questionTitle: q.title)

    Text(q.title)
        .font(.title2)
        .padding(.horizontal, 32)
        .multilineTextAlignment(.center)
        .padding(.bottom, 8)
        .padding(.top, 8)
        .minimumScaleFactor(0.5)

    Text(q.explanation)
        .font(.footnote)
        .padding()
        .padding(.bottom, 16)
        .lineLimit(nil)
        .fixedSize(horizontal: false, vertical: true)

    HStack(spacing: 48) {
        FloatingButton(
            action: {
                answerButtonTapped(randomFlip)
            }, label: randomFlip ? "Yes" : "No",
            positive: true)
        FloatingButton(
            action: {
                answerButtonTapped(!randomFlip)
            }, label: randomFlip ? "No" : "Yes")
    }
}

```

## AI Assisted question generation
So wouldn't it be cool if we could use an LLM for `insert anything at this point` - Yes.

Curating a good yes/no question is one thing, but generally we also want to generate some more metadata.
For each question suggestion we generate some data points.


```ts
import { z } from "zod";

// Zod schema, usable both for our own API validation and as a
// structured-output schema with the OpenAI SDK.
const questionSuggestion = z.object({
  error: z.boolean(), // is this a valid question, or are we all out of tokens etc.
  reason: z.string().optional(), // why this is an invalid question
  message: z.string().optional(), // friendly message
  title: z.string().optional(),
  category: z.string().optional(),
  keyword: z.string().optional(),
  explanation: z.string().optional(), // explain and reason about the question
  tags: z.array(z.string()).optional(),
});
```

More data nice!

## Fully AI generated questions
Okkkaaaay, but this app lives on my phone with a D1 database (a little Cloudflare-managed SQL database), and the AI is nice to me and all.


And all my deeply philosophical questions have been asked:


![Is pineapple a popular topping on pizza?](/images/news/pizza2.png)
![Is pizza considered a good food?](/images/news/pizza1.png)


So wouldn't it be cool if we could use an LLM for `insert anything at this point` - Yes.

OK! So for an LLM to continuously generate interesting questions we can use a little something from the foundation models playbook.

1. Set up a web scraper going hunting for data.
2. Pick some good RSS feeds (Really Simple Syndication, an ancient form of XML markup) - *the news sites are a positive source of daily fun things, right (right? help)?*
3. Use an LLM to generate questions from the scraped data.
4. Post the question as an AI persona into the API and DB and show it in the app.
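Steps 2-4 can be sketched as two pure functions; the prompt wording, field names, and payload shape below are illustrative, not the real API:

```typescript
// Hypothetical sketch: turn a scraped RSS item into a question payload
// posted by an AI persona. Names and shapes are my own, not the actual code.
interface FeedItem {
  title: string;
  link: string;
}

// Step 3: the prompt handed to the LLM together with the question schema.
function buildQuestionPrompt(item: FeedItem): string {
  return [
    "Generate a single yes/no question from this news headline.",
    `Headline: "${item.title}" (${item.link})`,
    "Respond with JSON matching the question suggestion schema.",
  ].join("\n");
}

// Step 4: the payload the persona posts into the API and DB.
function buildPersonaPost(personaId: string, questionTitle: string) {
  return { personaId, title: questionTitle, source: "rss" };
}

const prompt = buildQuestionPrompt({
  title: "Is pineapple pizza gaining popularity?",
  link: "https://example.com/article",
});
```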


---

**But what's an AI persona in the app anyway?**

Okay let's just make up some dudes and dudettes.
Eeeh.. here is a game dev!
![An AI game developer](/images/news/ai-gamedev.png)

*(Thanks [Black Forest Labs](https://bfl.ai/), I ran Flux offline and heated my house with the GPU (Graphics Processing Unit, the thing in your computer that goes brrrrrrrrr).)*


**Sara Hjort**
*An independent game developer who thrives on creative storytelling, technical experimentation, and pushing artistic boundaries through interactive experiences. Prefers narrative-driven design and self-expression as the key drivers of meaningful worlds.*
**#PrettyGenericButFine**

OK let's give this persona some reasoning, personality traits, likes and dislikes, cultural background and motivation.
That's a human right?
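For illustration, a persona record might look something like this in TypeScript - a guess at the shape of a file like `sara-hjorth.ts`, not its actual contents:

```typescript
// Hypothetical persona record; all fields and values are made up for
// illustration, based on the traits listed in the text above.
interface Persona {
  name: string;
  bio: string;
  traits: string[]; // personality traits
  likes: string[];
  dislikes: string[];
  culturalBackground: string;
  motivation: string;
}

const sara: Persona = {
  name: "Sara Hjort",
  bio: "Independent game developer focused on narrative-driven design.",
  traits: ["curious", "stubborn", "playful"],
  likes: ["interactive fiction", "synthwave"],
  dislikes: ["crunch", "loot boxes"],
  culturalBackground: "Swedish indie scene",
  motivation: "Self-expression through meaningful worlds.",
};

// The record doubles as system-prompt material for the LLM.
function personaSystemPrompt(p: Persona): string {
  return `You are ${p.name}. ${p.bio} Traits: ${p.traits.join(", ")}. Motivation: ${p.motivation}`;
}
```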


[🎬 Video](/clips/personas-flash.mp4)

`I generated and hand-prompted 50 of these NPCs.`


[{"download full": "sara-hjorth.ts"}](/sara-hjorth.ts)

---

I assume this atheist artist will bring in some great yes and no questions from the [Christian iBelieve site](https://www.ibelieve.com/rss/) and other questionable sources.

But more importantly, regularly updating websites. Anything from reggae news, trans rights, the BBC, and other reputable sources.

And then we match a question with one of the personas.

Remember we pick a question for each day.

But now that we've gone down this road we can just make the AI personas answer questions based on their beliefs and values.

![An AI game developer](/images/news/sara-onpoint.jpg)


## Now that's an app

[🎬 Video](/clips/open-question-app.mp4)


## Open Question Podcast

So now we've got an app, and we've got personas generating and answering questions.
We also generate blog posts from the results. And you can chat with the personas for (reasons unknown)?

For a multimedia empire something was missing - how about a podcast? And how do you do that in the cloud without a dedicated server?
Turns out you can use WASM (WebAssembly, a binary instruction format for a stack-based virtual machine; works in browsers and stuff) and Rust on Cloudflare Workers to combine audio sources and generate audio files.


### Engineering cooking podcast recipe


```ts

import {
  WorkflowEntrypoint,
  WorkflowStep,
  WorkflowEvent,
} from "cloudflare:workers";

export class WorkflowPodcast extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Go on an adventure
    await this.doSomeWork();
    await this.doEvenMoreStuff();
    await this.callTheRustRPCService();
    await this.finish();
  }
}
```

1. Get the week's questions from the database.
2. Force an LLM (Large Language Model, that's your AI) to make an intro, highlights of the questions, and anything else you want to direct the script with.
3. Generate the podcast scripts for each chunk.
4. Pick your favorite LLM that generates audio and voice with your personas.
5. Generate an MP3 background tune with another LLM.
6. Write some timeline combination code that works on edge compute with WebAssembly and Rust.
7. Double-check that `ffmpeg` indeed still doesn't work on edge.
8. Generate metadata XML and the final MP3.
9. Generate an RSS feed.
10. Upload to Apple.
11. Do all this in Cloudflare Workers workflows.
12. Profit?
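Step 9 (the RSS feed) is plain RSS 2.0 with an `<enclosure>` tag per episode; a minimal sketch of building one feed item (field values are illustrative):

```typescript
// Hypothetical episode metadata; the real feed would carry more fields
// (description, GUID, duration, artwork, etc.).
interface Episode {
  title: string;
  url: string;     // public MP3 URL
  bytes: number;   // file size in bytes, required by the enclosure tag
  pubDate: string; // RFC 822 date, the format RSS expects
}

// Build one RSS 2.0 <item> element for the podcast feed.
function rssItem(ep: Episode): string {
  return [
    "<item>",
    `  <title>${ep.title}</title>`,
    `  <enclosure url="${ep.url}" length="${ep.bytes}" type="audio/mpeg" />`,
    `  <pubDate>${ep.pubDate}</pubDate>`,
    "</item>",
  ].join("\n");
}

const item = rssItem({
  title: "Open Question Podcast #1",
  url: "https://example.com/podcast-episode-1.mp3",
  bytes: 12345678,
  pubDate: "Fri, 07 Feb 2026 00:00:00 GMT",
});
```

Apple Podcasts then just polls the feed URL; no per-episode upload call is needed once the feed is registered.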

```rust

// Some WASM-compatible Rust on edge compute, in the name of audio fillers..
let sample_rate = 44100; // 44.1 kHz
let padding_duration = 5 * sample_rate; // 5 seconds of padding before & after speech
let fade_out_duration = 5 * sample_rate; // 5-second fade-out at the end
let bgm_length = bgm_samples.len();
let outro_duration = 16 * sample_rate;

// Step 3: prepend 5s of background music before the speech starts.
let mut bgm_index = 0;
for _ in 0..padding_duration {
    let bgm_sample = if bgm_length > 0 {
        (bgm_samples[bgm_index] as i32) / 4 // lower the volume to sit under the voice
    } else {
        0 // no background track: pad with silence
    };
    mixed_samples.push(bgm_sample as i16);
    if bgm_length > 0 {
        // Loop the music if it's shorter than the padding (avoids % 0 panic).
        bgm_index = (bgm_index + 1) % bgm_length;
    }
}

```
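The snippet declares a 5-second `fade_out_duration` but the fade itself isn't shown; it's just a linear gain ramp over the last N samples. A sketch in TypeScript for brevity (the same arithmetic would apply in the Rust mixer):

```typescript
// Apply a linear fade-out over the last `fadeSamples` samples.
// Gain goes from 1.0 at the fade start down to 0.0 at the final sample.
function applyFadeOut(samples: number[], fadeSamples: number): number[] {
  const start = Math.max(0, samples.length - fadeSamples);
  return samples.map((s, i) => {
    if (i < start) return s; // untouched before the fade region
    const gain = (samples.length - 1 - i) / Math.max(1, fadeSamples - 1);
    return Math.round(s * gain);
  });
}

const faded = applyFadeOut([1000, 1000, 1000, 1000], 2);
```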

The secret spice is how good your LLM is at interpreting and generating the persona's psychology versus its own censors and training data. You want the persona to actually embody their character traits, not just parrot safe corporate responses.
I think most of the modern flagship models do a great job of interpreting the persona records.

Behold an AI podcast worthy of perplexity:

[🔊 Audio](/podcast-episode-1.mp3)

[🔊 Audio](/podcast-episode-2.mp3)

Anyway I went outside and touched some grass. 🚶🍃

I have some more fun ideas; I'm going to do a part 2 of this exploration with some twists and turns later.
Think <Abbr explanation="Human In The Loop - Yes, that's you and/or me!">HITL</Abbr>, because this podcast generation was fully automated all the way from database to audio podcast app.

## Questions & Answers



*Questions about the article, answered by the developer.*



**1. You hand-prompted 50 AI personas — at what point did you stop feeling like a character designer and start feeling like a factory?**

Well, it was more like "there are so many diverse personas to express" than a workload. I have all these fine people now; they themselves can represent 20 other types. And if you mix, match, and meld these personas you have almost like Unreal's MetaHuman templates, just the persona part.

**2. How much of the podcast output is genuinely surprising to you versus predictable from the persona definitions you wrote?**

Well, it's quite different depending on which LLM you use, foundation or open source - it's how they interpret the persona and the questions. Some models are way too linear in their representation of the persona, so you can kind of predict most questions. But it's also a bit telling whether the yes and no questions are good or not.

**3. Why fight with WASM and Rust on edge compute for audio mixing instead of just spinning up a cheap server with ffmpeg?**

There are no servers-as-a-service on edge! Well, nowadays there kind of is.

**4. You're feeding RSS from sources like a Christian belief site into an atheist artist persona — isn't that basically engineered to produce spicy garbage?**

Yes, or at least some good contrast. It's like a job you're tasked to perform but would not have taken any part in if you'd thought about it a bit more carefully. Luckily LLMs in 2026 are naturally positive yes-men, so they manage a constructive way out of it.

**5. You teased HITL for part 2 — does that mean the fully automated version produced something that made you go 'okay, a human needs to be in this loop'?**

No, more about bringing the human value together with the AI part.
Listening to only AI-generated slop all day is not as fun as throwing a couple of humans in there.
Maybe not call it soul, but yes.

**6. This feels like it could be a game system — NPCs with beliefs answering moral questions and reacting based on personality — is that where this is heading for Lunar Soil?**

For Lunar Soil's NPCs we already have quite a few strong characters. But yes, you could model and expose these personas to basically any game world and they could enrich its world with their personalities.


---

*Canonical URL: https://morgondag.io/news/ai-podcast*