9 min read
How AI Wrote a 10-Track Album (And What I Learned)

I used AI music tools to compose a narrative album about Mansa Musa, the richest man in history. The 10-track project included a flagship cinematic video. Here is what the process taught me about AI as a creative partner.

AI music, Suno, Mansa Musa, creative AI

In February 2026, I released "A King's Gamble," a cinematic visual built around a narrative album about Mansa Musa, the 14th-century emperor of the Mali Empire and widely considered the richest person in human history. The YouTube description reads: "This is not a music video in the traditional sense. It is a cinematic interpretation of legacy."

That project was composed entirely using AI music tools. And the story of how it came together reveals something important about where AI creative tools actually stand, what they can do, and what they still cannot.

The Mansa Musa Connection

Why Mansa Musa? Because his story is one of the most remarkable in human history, and most people have never heard it told properly.

Mansa Musa ruled the Mali Empire from approximately 1312 to 1337. His wealth was so vast that when he made his pilgrimage to Mecca in 1324, he gave away so much gold along the route that he single-handedly destabilized the economies of the cities he passed through. The price of gold in Cairo crashed and did not recover for over a decade because of one man's generosity.

That story connects directly to things I care about deeply: African history, the power of generosity, and the idea that wealth without purpose is meaningless. It also connects to one of my core goals: making giving as culturally powerful and addictive as social media.

So when I started experimenting with AI music composition, Mansa Musa was the natural subject for my first serious narrative project.

The Suno Workflow

I use Suno AI for music composition. Over the past two years, I have created over 2,500 songs on the platform. That is not a typo. Twenty-five hundred. Most of them are experiments, tests, and iterations. But within that volume are dozens of tracks that I consider genuinely good.

Suno works by taking text input, which can include lyrics, style descriptions, and structural metatags, and generating a complete audio track. You can specify genre, mood, tempo, instrumentation, and vocal style. The AI composes the music, generates the vocals, and produces a finished track.
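A minimal sketch of what that text input can look like, assembled programmatically. The helper function and the example lyrics are hypothetical, and the `[Verse]`/`[Chorus]` structural metatags follow common community conventions rather than an official Suno API:

```python
# Hypothetical helper for assembling a Suno-style text prompt.
# The [Verse]/[Chorus] structural metatags follow common community
# conventions; this is not an official Suno API.

def build_prompt(style: str, sections: list[tuple[str, str]]) -> str:
    """Combine a style description with tagged lyric sections."""
    lines = [f"Style: {style}", ""]
    for tag, lyrics in sections:
        lines.append(f"[{tag}]")
        lines.append(lyrics.strip())
        lines.append("")
    return "\n".join(lines).rstrip()

prompt = build_prompt(
    "West African percussion, kora melody, cinematic, 90 BPM",
    [
        ("Verse", "Gold in every hand he passed..."),
        ("Chorus", "A king's gamble, a king's gold..."),
    ],
)
print(prompt)
```

Keeping the style description and the tagged lyric sections as separate inputs makes it easy to iterate on one without retyping the other, which matters when you are generating dozens of variations of a single track.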

The Mansa Musa album started with research. I spent time studying the historical record, reading about the pilgrimage, the Mali Empire's trade networks, the cultural context of 14th-century West Africa. Then I wrote lyrical concepts for each track, defining the narrative arc of the album.

The album covers Musa's rise, his decision to undertake the pilgrimage, the journey itself, the impact on the cities he visited, and the legacy he left behind. Each track corresponds to a chapter in the story. The genres shift to match the narrative: from West African rhythms in the opening tracks to more contemplative pieces as the story deepens.

"It takes me five minutes to make a song, a custom song that is going to relate to the specific thing," I explained during a planning session. That speed is what makes AI music composition fundamentally different from traditional production. A traditional album takes months of studio time, session musicians, mixing, and mastering. An AI-composed album takes days of creative direction and iteration.

The Creative Partnership

Here is what people get wrong about AI music. They hear "AI composed" and assume the human did nothing. That is like saying a director did nothing because the camera filmed the movie.

For every track on the Mansa Musa album, I made creative decisions at every stage. I chose the genre and mood. I wrote or directed the lyrical content. I specified the emotional arc. I evaluated multiple generations and selected the best takes. I sequenced the tracks to create narrative flow.

What I did not do is play an instrument, hire a vocalist, book studio time, or spend months in post-production. The mechanical aspects of music production, the parts that require expensive equipment and technical expertise, were handled by AI. The creative vision, the storytelling, the curation, those remained mine.

This mirrors exactly how I use AI in business coaching. The AI handles the mechanical work. The human provides the vision, the context, and the quality judgment. Neither one alone produces the result. Together, they produce something that neither could achieve independently.

The Visual Component

"A King's Gamble" was not just an album release. It was a visual experience. The flagship video combined AI-generated imagery with carefully composed music to create what I described as a cinematic interpretation.

The video production used a combination of tools. Image generation for the visual frames. Ken Burns-style effects for camera movement on stills. Audio synchronization to lock the visuals to the musical beats. The entire pipeline was built using the same approach I use for everything: identify the task, find the right AI tool, iterate until the quality is where it needs to be.
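The Ken Burns pass, for instance, reduces to interpolating a zoom and pan between two keyframes, one crop per video frame. A minimal sketch of that interpolation (the function name and keyframe values are illustrative; a real pipeline would feed these parameters to a renderer such as ffmpeg):

```python
# Sketch of Ken Burns keyframe interpolation: given start and end
# (zoom, pan_x, pan_y) keyframes, compute the crop parameters for
# each frame. Values are illustrative; a renderer consumes them.

def ken_burns_frames(start, end, n_frames):
    """Linearly interpolate (zoom, pan_x, pan_y) across n_frames."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1) if n_frames > 1 else 0.0
        frames.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return frames

# Slow push-in from 1.0x to 1.2x zoom, drifting right, over 5 frames.
path = ken_burns_frames((1.0, 0.0, 0.0), (1.2, 40.0, 0.0), 5)
```

The same interpolation, run at 24 or 30 frames per second over a still image, is what turns a static AI-generated frame into apparent camera movement.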

This was not a casual project. The production involved careful storyboarding, frame-by-frame composition decisions, and multiple rounds of quality assurance. AI generated the raw materials, but the editorial vision was entirely human.

The Five-Track Game Soundtrack

The Mansa Musa album was the first major music project. The second was the Pharmageddon Original Soundtrack, five tracks composed specifically for the pharmacy simulation game.

Each track was designed for a specific gameplay context. "The Honda Civic" is lo-fi chill for the title screen, setting a relaxed tone before the chaos begins. "Wildin" is an instrumental for the early shifts, steady enough to work in the background without distracting. "Hard Mode" increases the energy for later shifts as the difficulty ramps up. "Code Red (Code Ozempic)" is the rush hour banger, high energy to match the peak chaos. "Pharmageddon" is the flagship track, used for the finale and victory sequences.

Game music comes with different constraints than album music. The tracks need to loop cleanly. They need to support the gameplay without overwhelming it. They need to be energetic enough to enhance the experience but subtle enough not to become annoying after repeated plays. These constraints shaped the composition prompts I gave to Suno.
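The "loop cleanly" constraint is worth making concrete. A standard fix when a generated track does not loop is to crossfade its tail into its head, so the seam falls mid-track instead of at the cut. A sketch of that idea over a plain list of samples (illustrative only; in practice you would do this in a DAW or with an audio library):

```python
# Sketch: make a track loop cleanly by crossfading its tail into its
# head, then trimming the tail. Operates on a plain list of samples;
# illustrative only -- real audio work uses a DAW or audio library.

def crossfade_loop(samples, fade_len):
    """Blend the last fade_len samples into the first fade_len samples."""
    body = samples[:-fade_len]          # slice copies, original untouched
    tail = samples[-fade_len:]
    for i in range(fade_len):
        w = i / fade_len                # ramps 0 -> 1 across the fade
        body[i] = (1 - w) * tail[i] + w * body[i]
    return body

looped = crossfade_loop([0.5] * 100, fade_len=10)
```

After the blend, the last sample of the loop flows into what used to be the start of the tail, so repeating the track no longer produces an audible click at the seam.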

The soundtrack is being distributed on Spotify through DistroKid, the same platform we used for the Mansa Musa album. This creates a secondary discovery path: someone who finds the game can find the music on Spotify, and someone who finds the music on Spotify can find the game.

The Full Catalog

Beyond the album projects, I maintain a catalog of 43 analyzed tracks that span multiple genres and use cases. Instrumentals include everything from cinematic neoclassical ("Patient Room Emotional," rated 9 out of 10 for dark emotional VR) to smooth jazz ("all in there") to lo-fi hip-hop ("Clean Room Training Ambient"). Vocal tracks include conscious hip-hop ("A Prompt's Power Song"), motivational anthems ("Go Get It Now"), and reflective pieces.

Each track is cataloged with genre, mood, tempo, and use-case ratings. This is not a music collection. It is a production library. When I need background music for a video, a soundtrack for a VR experience, or intro music for a live stream, I pull from the catalog based on the specific requirements.
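In practice that catalog behaves like a queryable database: each record carries the metadata fields above, and pulling music for a project is a filter on use-case rating. A sketch with illustrative records (the 9/10 VR rating for "Patient Room Emotional" comes from the catalog described above; the tempos and other ratings are made up for the example):

```python
# Sketch of a production-library lookup. Each track carries genre,
# mood, tempo, and per-use-case ratings; queries filter on them.
# Records are illustrative: the 9/10 VR rating is from the catalog,
# the BPM values and other ratings are invented for this example.

CATALOG = [
    {"title": "Patient Room Emotional", "genre": "cinematic neoclassical",
     "mood": "dark", "bpm": 70, "ratings": {"vr": 9, "stream_intro": 3}},
    {"title": "Clean Room Training Ambient", "genre": "lo-fi hip-hop",
     "mood": "calm", "bpm": 85, "ratings": {"vr": 7, "stream_intro": 5}},
]

def pick_tracks(use_case, min_rating=7):
    """Return titles rated at least min_rating for the given use case."""
    return [t["title"] for t in CATALOG
            if t["ratings"].get(use_case, 0) >= min_rating]

print(pick_tracks("vr"))  # both tracks clear the default threshold
```

The point of the ratings field is that one track can score high for VR ambience and low as a stream intro; the lookup returns different shortlists for different jobs without re-listening to the whole library.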

The YouTube channel features several of these tracks as standalone releases: "ROI From AI" (the soulful business automation anthem), "Did It Get Done Though" (East Coast motivational hip-hop), "Hungry for It" (a mindset anthem), "Put That Work In" (a grind anthem), "Truth Don't Burn" (soulful resistance), "Atomic" (inspired by James Clear's Atomic Habits), "Word Is Bond" (inspired by The Four Agreements), and "Frog Season" (inspired by Brian Tracy's Eat That Frog).

Every one of these tracks connects back to a book, a concept, or a principle that I actually teach and live by. The music is not random creative output. It is content that reinforces the intellectual and philosophical framework of everything else I do.

What I Learned

Five lessons from composing over 2,500 AI-generated songs.

Lesson 1: Volume is the prerequisite for quality. You cannot compose one song and expect it to be great. The AI needs iteration, and you need practice directing it. The first hundred songs taught me what works and what does not. The next hundred refined my approach. By the five-hundredth song, I could consistently produce tracks that I was proud of.

Lesson 2: Genre specificity matters enormously. Telling Suno "make a hip-hop song" produces generic output. Telling it "East Coast rap, boom-bap drums, soulful piano sample, confident male vocal, 95 BPM, conscious lyrics about financial literacy" produces something specific and usable. The more precise the creative direction, the better the result.

Lesson 3: AI music is best for projects with clear creative vision. If you know what you want, AI can produce it remarkably well. If you are hoping AI will figure out what you want, the results will be scattered and unsatisfying. The vision has to come from the human. The execution can come from the machine.

Lesson 4: The real cost is curation, not creation. Generating a song takes minutes. Evaluating whether it is good enough takes longer. Deciding which of five versions best serves the narrative takes longer still. The creative bottleneck shifted from production to selection.

Lesson 5: Music is content, and content compounds. A song is a YouTube video. A YouTube video is a Spotify release. A Spotify release is a game soundtrack. A game soundtrack is a portfolio piece. A portfolio piece is a conversation starter. Each format feeds the others. The compounding effect of a single creative asset across multiple platforms is the real return on investment.

The Democratization Argument

The Mansa Musa album would have cost tens of thousands of dollars to produce traditionally. Studio time, session musicians, mixing engineers, mastering engineers, vocalists. The total would easily reach $30,000 to $50,000 for a professional 10-track album.

I produced it for the cost of a Suno subscription.

This is what democratization actually looks like. Not "AI makes music" as an abstract concept. A pharmacist from Allentown, Pennsylvania, produced a narrative album about a 14th-century African emperor, complete with cinematic visuals, and distributed it globally through Spotify. That sentence would have been absurd five years ago.

The quality debate is real. AI-composed music is not indistinguishable from human-composed music. There are tells. There are limitations. But for the use cases that matter to me (portfolio content, game soundtracks, educational anthems, and narrative projects), the quality is more than sufficient. And it will only improve.

The question is not whether AI music will replace human musicians. It will not. The question is whether people with creative vision but no musical training will be able to produce music that serves their projects. The answer is already yes.


Dr. Jeff Bullock, PharmD

CEO of PRISM AI Consultants. PharmD from Xavier University of Louisiana. 18 years at CVS Health, now building AI systems that run real businesses. 749+ coaching sessions delivered, 34 autonomous agents in production.

Want to go deeper?

Book a call to discuss how AI can work for your specific business.