


Collins Asein

Founder of MixMasterAI. Music producer, audio engineer, and AI music researcher. Built the mastering pipeline used to process every track that runs through the platform.

Founder · Audio engineer · AI music researcher · Software engineer

Background

How I got here

I started recording and producing music in the 2010s — typical bedroom-producer path: free DAW, cheap interface, a lot of trial and error. The thing that hooked me wasn't making the song. It was the post-production: figuring out why my tracks sounded thin compared to commercial releases, learning what mastering engineers were actually doing, and digging into the math behind LUFS, true peak, and multiband compression.

When AI music generators became real around 2022 — first the early Suno builds, then Udio, then Mureka and the rest — I noticed the same problem at scale. Generated tracks sounded great in the player and fell apart on Spotify. They were pre-limited, peaks shaved off, with the metallic sheen that comes from decoding a neural audio codec's output back to a waveform. Mastering an AI track is a different problem from mastering a recorded one, and there wasn't a free tool that handled it.

MixMasterAI is the tool I built to solve that. It runs the same DSP chain I'd run by hand in a DAW — reference matching, genre-aware EQ, multiband compression, true-peak limiting, LUFS normalization — but applied automatically and tuned for the artifacts AI generators leave behind. Every page on this site is something I personally use or built for someone who needed it.
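As a rough illustration of how a chain like that composes, here is a minimal sketch in Python. The stage bodies are simplified placeholders, not the platform's actual DSP: the RMS-based loudness proxy stands in for gated BS.1770 measurement, the clip-based limiter stands in for oversampled true-peak limiting, and the function names and default targets are assumptions for the example.

```python
import numpy as np

# Placeholder stages -- each would hold real filter/compressor DSP in practice.
def genre_eq(x, sr):
    return x  # e.g. shelving/peaking filters chosen per genre profile

def multiband_comp(x, sr):
    return x  # e.g. split into bands, compress each, re-sum

def true_peak_limit(x, sr, ceiling_db=-1.0):
    # Crude stand-in: hard clip at the ceiling instead of oversampled limiting.
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x, -ceiling, ceiling)

def lufs_normalize(x, sr, target_lufs=-14.0):
    # Crude RMS proxy for loudness; real code uses K-weighted, gated BS.1770.
    rms_db = 20 * np.log10(np.sqrt(np.mean(x**2)) + 1e-12)
    gain = 10 ** ((target_lufs - rms_db) / 20)
    return x * gain

# Stage order follows the chain described above.
CHAIN = [genre_eq, multiband_comp, true_peak_limit, lufs_normalize]

def master(x, sr=44100):
    for stage in CHAIN:
        x = stage(x, sr)
    return x
```

Running a quiet test tone through `master` brings its RMS level up to roughly the -14 dB target while leaving the waveform shape intact.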

Areas of expertise

What I work on

Mastering & DSP

Designed the multiband compression, true-peak limiting, and LUFS-normalization chain that processes every track on MixMasterAI. Implementation follows ITU-R BS.1770-4 and EBU R128 measurement standards.
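For reference, the BS.1770-4 measurement starts with a two-stage K-weighting pre-filter (a high-frequency shelf followed by a high-pass) before mean-square energy is converted to LUFS. A minimal ungated mono version might look like the sketch below; the coefficients are the standard's published values at 48 kHz, the function names are illustrative, and integrated loudness additionally requires the standard's -70 LUFS absolute and -10 LU relative block gating, omitted here.

```python
import numpy as np
from scipy.signal import lfilter

# BS.1770-4 K-weighting pre-filter coefficients at 48 kHz sample rate.
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HPF_B = [1.0, -2.0, 1.0]
HPF_A = [1.0, -1.99004745483398, 0.99007225036621]

def k_weight(x):
    """Apply the two-stage K-weighting filter (shelf, then high-pass)."""
    return lfilter(HPF_B, HPF_A, lfilter(SHELF_B, SHELF_A, x))

def loudness_lufs(x):
    """Ungated mono loudness per BS.1770-4 (no block gating)."""
    z = k_weight(x)
    return -0.691 + 10 * np.log10(np.mean(z**2) + 1e-12)
```

As a sanity check, a full-scale 997 Hz sine at 48 kHz should measure close to -3 LUFS, since the -0.691 dB offset in the formula compensates for the filter's gain near 1 kHz.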

AI music research

Hands-on testing of every major AI music platform since the Suno v1 alpha — Suno, Udio, Mureka, ElevenLabs Music, AIVA, Stable Audio, Boomy, Sonauto, Riffusion, MusicFX, MiniMax, MusicGen, Loudly, Beatoven.ai, Soundful, Soundraw, Mubert, and Ecrett.

Streaming-platform specs

Worked through the audio specs for Spotify (-14 LUFS), Apple Music (-16 LUFS), YouTube (-14 LUFS), Tidal (-14 LUFS), TikTok, SoundCloud, EBU R128 broadcast, ACX/Audible audiobook delivery, and the platform-specific quirks that affect how a master plays back on each.
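Those targets boil down to a simple gain calculation once a track's loudness has been measured. A sketch of that step follows; the table and helper names are illustrative, the values reflect the targets listed above, and each platform's current published spec should always be checked before relying on them.

```python
# Loudness normalization targets in LUFS, as listed above.
PLATFORM_TARGETS_LUFS = {
    "spotify": -14.0,
    "apple_music": -16.0,
    "youtube": -14.0,
    "tidal": -14.0,
}

def gain_to_target(measured_lufs, platform):
    """dB of gain to apply so the track measures at the platform's target.

    Negative means the platform will turn the track down; positive means
    there is headroom to master louder (or the platform will turn it up).
    """
    return PLATFORM_TARGETS_LUFS[platform] - measured_lufs
```

For example, a track measuring -9.5 LUFS would be turned down 4.5 dB by Spotify's normalization, while the same master would be turned down 6.5 dB on Apple Music.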

Software engineering

Built the entire MixMasterAI stack — Next.js frontend, Python FastAPI backend with NumPy/SciPy DSP, ffmpeg.wasm in-browser tooling, and the mastering pipeline that runs every job.

Editorial standards

How I review and update content

Every guide and prompt page on MixMasterAI gets a manual review before it publishes. AI tools change quickly — what worked in Suno v3 may not work in v5; ElevenLabs Music launched in April 2026 and changed the prompt landscape — so review dates appear at the bottom of every article. If a date is older than 90 days, that page is in the queue for an update.

Technical claims (LUFS targets, codec specs, sample rates) cite the upstream standard whenever there is one — ITU-R BS.1770-4 for loudness measurement, EBU R128 for broadcast delivery, ACX for Audible, the platform's own published spec for streaming targets. If a claim doesn't have an upstream standard, it comes from direct testing of the tool in question, and the page says so.

Mistakes happen. If you spot one — a wrong LUFS target, an outdated AI tool tip, a prompt that no longer works — email hello@mixmasterai.co and I'll fix it within a few days.

Try the platform

Drop a track and get a streaming-ready master back in under 60 seconds. The same mastering pipeline I'd run by hand, applied automatically.

Open the mastering tool →
Master your track free