Forum Replies Created
-
Dana Nielsen
Administrator February 16, 2025 at 12:13 am in reply to: Member Poll: Trends in file delivery when working with remote session musicians
Dylan!
What an excellent and thoughtful question.
I get files of all kinds – some great, some not so great haha. Here are a few things that come to mind.
I love it when:
– Files and tracks have been named thoughtfully and thoroughly (no “Audio_01” please! 😂)
– Tracks are laid out top-to-bottom in some kind of organized fashion
– Same goes for track and clip colors – they don’t need to be MY preferred colors, but seeing some type of system helps me to understand the production more quickly and easily.
– Limited options, or no options — just the finished product please 🙂 You aptly mentioned this concept in our Member Meet-up Zoom the other day!
– Waveforms that are nice and loud! But, obv, not clipping. I can’t tell you how many kick and snare and lead vocal tracks I’ve applied +20dB of clip gain to. “Bro, your snare track looks like an egg shaker” haha
– If you’re sending a folder of multi-track audio files – as opposed to a Pro Tools session – I love it when people are mindful of their folder names and file names. Such as … Instrument_SongName_mix#_24bit48k_STEM (or MULTI if multitrack file)
– Bonus points if multi-track exports have a 2-digit number upfront to maintain their top-down track layout (there’s a little renaming sketch after this list):
01_Kick_SongName_…
02_Snare-Top_SongName_…
03_Snare-Btm_SongName_…
– And lastly, since you mentioned “uh-the-playing” … I do love excellent playing (and/or editing) that doesn’t require further editing on my part.
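Speaking of those numbered filenames … if you ever want to script that step yourself, here’s a rough Python sketch of the idea. (I personally do this with ‘A Better Finder Rename’ – see the blog link below – so treat this as a toy example with made-up folder and file names.)

```python
# Prepend 2-digit numbers to exported multitracks so they keep their top-down order.
# Toy example: assumes files are already named Instrument_SongName_... and that
# you pass in the desired track order yourself.
from pathlib import Path

def number_multitracks(folder: str, ordered_names: list[str]) -> None:
    """Rename e.g. 'Kick_SongName_mix3_24bit48k_MULTI.wav' -> '01_Kick_SongName_mix3_24bit48k_MULTI.wav'."""
    for i, name in enumerate(ordered_names, start=1):
        src = Path(folder) / name
        if src.exists():
            src.rename(src.with_name(f"{i:02d}_{name}"))

# Hypothetical usage:
# number_multitracks("SongName_MULTIS", [
#     "Kick_SongName_mix3_24bit48k_MULTI.wav",
#     "Snare-Top_SongName_mix3_24bit48k_MULTI.wav",
#     "Snare-Btm_SongName_mix3_24bit48k_MULTI.wav",
# ])
```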
Re: Phase … I’m always appreciative and surprised when drums and other things are in phase, tho I don’t expect them to be, and I don’t get riled up if they’re not. After all, phase is a pretty advanced thing to understand and be able to hear and/or adjust properly. And that’s part of MY job to fix that stuff! Plus … if they think their song sounds good with everything out of phase, well that’s an easy “win” for me, as their mixer, haha.
Further resources and fun:
Here’s a brand new blog post and video I made that’s very relevant to this great topic: https://mixprotege.com/2025/02/15/batch-rename-audio-files-with-a-better-finder/
Here’s my Stems Checklist I send to clients who are getting ready to export multitrack audio files: https://dananielsen.com/stems-checklist/
Thanks again, @dylanmandel, for the great topic! ⚡️ I hope to hear additional responses from others here – and your own as well – that I can learn from! 🤘
mixprotege.com
Batch Rename Audio Files with 'A Better Finder'
'A Better Finder Rename' is a powerful MacOS batch renaming tool that has revolutionized my audio file workflow.
-
Dana Nielsen
Administrator February 9, 2025 at 9:16 pm in reply to: Question about using compression for sustain
Jezze!
I love that you’re diving deep into compression after watching that Zoom! And you are totally on the right track finding the best settings to lengthen sustain so that held notes don’t die away so quickly.
I think the best strategy (as demonstrated in the Zoom) is to go nuts with the settings in order to hear what each control has to offer. That said, it is easier to hear attack and release settings when applied to brief sounds with sharp transients such as drum hits, which is why I tend to use drums for compression demonstration purposes. Compression settings are harder to “hear” when applied to sustained sounds, so I’m not surprised you’re finding it all a bit more confusing as you delve deeper into sustained notes!
I love your practical example of using compression to lengthen the sustain of a held guitar note so that it doesn’t drop in volume so quickly. This comes up all the time in both the production and mixing phases of a record.
I find it best to imagine it a bit like an algebra equation (barf … I know … bear with me). Or better yet, think of the game show Jeopardy, where you’re handed “the answer” and just have to work backwards to come up with “the question.”
Here’s the same approach described differently:
Rather than wondering “How can I make the back half of my guitar notes louder?”, ask yourself “How can I make the front half of my guitar notes quieter?”
Here’s a 5-step approach:
- Find The Quiet: Ok so, park your playhead/cursor in the middle or end of a long sustained guitar note. Choose a position during the note where the level is sustaining nicely but it’s just too dang quiet to cut through the mix. (FUN FACT: you’ve just discovered your threshold! See step 2)
- Set The Threshold: Continue to play that quiet sustained section of the note (or loop it) while you lower the compressor’s threshold. Watch the gain reduction meter as you adjust the threshold and stop lowering the threshold when you start to see the tiniest bit of reduction registering on the GR meter.
- Make It Loud: Continue looping the quiet sustained “below threshold” portion of your note while you turn up the compressor’s make-up gain. Add make-up gain until it’s as loud as you like.
- Enter Launch Codes: Adjust the compressor parameters to the max, just for fun (and to protect your ears now that we’ve added a bunch of makeup gain). Try a ratio of 10:1 or higher; fastest attack, fastest release.
- Let ‘Er Rip: Now position your playhead just before the note and let ‘er rip! With these settings you should see and hear a TON of gain reduction during the initial Attack and Decay of the note. And due to the fast release setting, you should see no further gain reduction by the time the playhead reaches the Sustain and Release part of the note chosen in step 1.
From there, you can continue to adjust the parameters (and if you’re curious what those knobs are actually doing to the gain, there’s a little code sketch after this list):
- Missing some of the plucky attack as the pick strikes the guitar string? Increase the compressor’s attack setting to let some of that through!
- Are you hearing “pumping” artifacts during the sustain of the note? Increase the compressor’s release setting to smooth things out!
- Want a super-sustained sound? Lower the threshold all the way down to just above the noise floor and turn up the make-up gain to 11!
- Love the way your compressor is controlling the dynamics but it’s starting to sound unnatural? Try a lower ratio! Or a “softer knee” if your compressor has it! Or if your compressor has a “mix” or “dry/wet” knob, blend back in some of the uncompressed signal!
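And for the extra-curious, here’s a toy Python sketch of what a compressor’s gain computer is doing with those threshold / ratio / attack / release / make-up controls. This is a deliberately simplified illustration with made-up default values – not how any particular plugin is coded – but it shows why a fast release lets the quiet sustain pass through (with make-up gain) while the loud front of the note gets clamped.

```python
import numpy as np

def toy_compressor(x, sr, threshold_db=-30.0, ratio=10.0,
                   attack_ms=1.0, release_ms=50.0, makeup_db=12.0):
    """Toy feed-forward compressor: envelope follower + static gain curve.
    x: mono audio as a float numpy array (roughly -1..1)."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for n, s in enumerate(np.abs(x)):
        coeff = atk if s > env else rel          # rise fast (attack), fall slow (release)
        env = coeff * env + (1.0 - coeff) * s    # smoothed level estimate
        level_db = 20.0 * np.log10(env + 1e-9)
        over = level_db - threshold_db
        gr_db = over * (1.0 / ratio - 1.0) if over > 0.0 else 0.0   # negative = gain reduction
        out[n] = x[n] * 10.0 ** ((gr_db + makeup_db) / 20.0)
    return out
```

Everything above the threshold gets squashed by the ratio; everything below it just gets the make-up gain – which is exactly the “make the front half quieter, then turn the whole thing up” move from the 5 steps.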
Lemme know if these tips are helpful, homey! Thanks for the great question – keep em comin!
-
Dana Nielsen
Administrator February 1, 2025 at 10:46 am in reply to: Vocal Delay – to be or not to be
Hey Drew!
What a great question: To be or not to be – be – be – be… 🎤🎶
@-PT raised some great follow-up questions that might shed light on why delay isn’t working for you as expected (thanks, Paul!), and here are some of my own thoughts and suggestions:
1. The “Pro” Advice Conundrum
There are a million “pros” on YouTube these days, and their advice can be all over the place—some great, some… not. I love that you’re watching, learning, and spotting trends, but more importantly, that you’re testing things out for yourself and letting your ear and intuition guide you. That’s what matters.
Keep in mind, these so-called “pros” aren’t mixing your song. They might not even work in your genre. What works in Top 40 pop or R&B doesn’t necessarily translate to an acoustic, folk, or bluegrass mix. Dig?
2. My Approach to Delay & Reverb
Personally, I always have delay and reverb options ready in every mix session—across every genre. That doesn’t mean I’ll use them, but they’re available if I need them.
For a deeper dive, check out my 5-step approach to FX while mixing:
🔗 Dialing in Delay & Reverb—During vs. After Mixing
When mixing vocals, I usually settle into one of four categories:
1️⃣ With reverb
2️⃣ With delay
3️⃣ With both reverb & delay
4️⃣ Bone friggin’ dry!
And guess what? In acoustic genres, #4 is often my favorite.
3. Why I Love a Bone-Dry Vocal
A completely dry vocal can be a powerful choice, especially in acoustic, folk, or bluegrass styles. Here’s why:
🎙 Intimacy – A dry vocal makes you feel like you’re in the same room as the singer, like they’re singing just for you.
💬 Honesty & Vulnerability – Even if the vocal has been comped and tuned (naturally, of course! 😉 See: Natural Vocal Production), leaving it dry makes it feel raw, exposed, and authentic. No fancy reverbs = nothing to hide.
🎭 Contrast – One of the most underrated tools in mixing is contrast. Dry vs. wet, loud vs. soft, clean vs. distorted. A bone-dry vocal at the start makes any reverb or delay you introduce later way more noticeable and exciting. Think of it like an EDM riser before the bass drops—it builds anticipation and impact.
4. The Right Way to Use Delay Without Losing Clarity
If you do want to use delay without muddying your vocal, here are a few key things to try:
✅ Use an Aux Send & Return – Instead of inserting delay directly on the vocal track (which forces you to blend wet/dry signals), route it through an aux send. That way, your dry vocal stays intact while you control the delay separately. Always set the delay plugin to 100% wet when using this method. (There’s a tiny sketch of this routing idea right after this list.)
✅ Dial it in on Headphones – Mixing in headphones lets you hear the delay in more detail. It also helps you realize you probably don’t need as much as you think! A good sign you’ve nailed it? When someone listens and says, “You should try a quarter-note delay throw on that high vocal line!” And you just smile because it’s already there—subtly working its magic.
✅ Try a Long Pre-Delay on Reverb – If your vocal gets lost in reverb, try adding a long pre-delay (e.g., 250ms). This creates a gap before the reverb tail kicks in, keeping the dry vocal upfront while still giving you that spacious effect. No pre-delay setting? Just insert a simple delay before the reverb.
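To make that aux-send idea concrete, here’s a tiny numpy sketch of the routing: the dry vocal stays completely untouched, the delay return is 100% wet (echoes only), and you just ride the return level. The delay time, feedback, and fader values here are made up purely for illustration.

```python
import numpy as np

def wet_only_delay(dry, sr, delay_ms=375.0, feedback=0.35, taps=6):
    """A 100%-wet delay 'return': echoes only, no dry signal mixed in."""
    d = int(sr * delay_ms / 1000.0)
    wet = np.zeros(len(dry) + d * taps)
    for n in range(1, taps + 1):
        wet[n * d : n * d + len(dry)] += dry * feedback ** (n - 1)   # each repeat decays
    return wet

def mix_with_send(dry, sr, return_level_db=-14.0):
    """Dry vocal at unity; the wet return rides on its own 'fader'."""
    wet = wet_only_delay(dry, sr)
    out = np.zeros(len(wet))
    out[: len(dry)] += dry                        # dry vocal untouched
    out += 10 ** (return_level_db / 20.0) * wet   # tuck the echoes in underneath
    return out
```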
5. The Verdict? No Delay = No Problem
So, back to your original question—are you a heretic for dropping vocal delay?
Heck no! There’s no “one-size-fits-all” rule in mixing. If your track sounds better without delay, trust your ears. It’s not about what’s “common” or “expected”—it’s about what works.
Hope this helps, Drew! And hope you don’t mind the long (and delayed 😬) response—mixing is an art, and art deserves some deep dives.
mixprotege.com
Dialing in Delay/Reverb in Mix (During vs After)
Do you feel like it's better to add delay and reverb as you go, or getting a good mix without it and then sprinkling in…
-
Dana Nielsen
Administrator December 31, 2024 at 11:02 pm in reply to: Rigid Audio going out of business sale
Ooh, what a ridonculous deal, @smoothygroove, thanks so much (as always) for the heads up!!! You da man!!
-
Dana Nielsen
Administrator December 17, 2024 at 11:39 am in reply to: Mixing with only the Midrange!!!???
Heck yeah, Jesse!
What you’re describing is exactly how/why I use my beloved little Radio Shack ‘Realistic’ speakers, which are positioned on a shelf on the other side of the room, pushed together to simulate mono sound, and far enough away from me so I’m never listening to those in the “sweet spot”. There literally is NO sweet spot for those, lol. Just “mono” midrange boom-box-style audio … the most important stuff to focus on. And I probably spend 50% or more of my mixing time on those lil guys! 🙀
This concept and process are outlined in my “Chaos to Clarity” PDF. Check it out and lemme know what ya think!
https://mixprotege.com/chaos-to-clarity/
mixprotege.com
Declutter any mix in 4 simple steps!
Bring definition and focus to any mix with my 4-step
-
Super cool, @jlew, thanks so much for sharing this! And FREE, no less! I didn’t know about this one and will check it out. Great to have @detective’s (partially) ringing endorsement, too!
PS – are you using this in addition to your new snazzy TC Electronic Clarity M meter? 🤓💜
-
Hey Drew!
Such a great question, AND a great challenge. Upright bass is a beast that can be difficult to wrangle, especially when you want sub-y low-end out of it.
For me, this process begins with mic placement. (I know … “yawn.” And prob also “too late,” as you’re working with already-recorded tracks, but bear with me. Perhaps these ideas will help your next recording. Plus, I’ll include some mixing tips that’ll help pre-recorded bass as well.)
My Best Microphone? My Ear.
I always start by listening to the player in the room. This is my method for recording any instrument. I pretend my ear is the microphone (or ears, plural, if I’m placing a stereo mic). I move my head around like a weirdo while the musician plays, and I find the spots that sound the best to me, and I put the mics there. Works every time.
My Go-To Two Mic Upright Bass Technique
For an upright bass I’m usually looking for a two-mic setup: one large-diaphragm condenser near the f-hole to pick up the deepest, richest, sub-iest sound my ears can locate; and one LDC on the neck facing down(ish) around where the neck meets the body of the instrument to pick up the mid-range definition, which helps define the bass “note” especially on small speakers. As long as those two mics are in-phase with each other I can adjust them during mixing to suit each individual song. A slow ballad might favor the F-hole low-end bloom, whereas an uptempo song might favor the neck mic so that the notes pop out nice and clear, and the tempo doesn’t get weighed down with sluggish low-end.
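Side note on keeping those two mics in phase: besides flipping polarity and listening, you can sanity-check the time offset between the two recorded tracks after the fact. Here’s a rough, purely illustrative Python sketch using simple cross-correlation (hypothetical helper – your ears still get the final vote).

```python
import numpy as np

def estimate_offset_samples(mic_a, mic_b, max_lag=2000):
    """Estimate how many samples mic_b lags behind mic_a (positive = later).
    Assumes both arrays are the same length and longer than 2 * max_lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    ref = mic_a[max_lag:-max_lag]
    scores = [np.dot(ref, mic_b[max_lag + lag : len(mic_b) - max_lag + lag])
              for lag in lags]
    return int(lags[int(np.argmax(scores))])
```

If the estimated offset is more than a handful of samples, nudging one track by that amount (or just moving a mic next time) usually tightens up the combined low end.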
Bass Bussin’
Regardless of the tempo, both of these mics (and sometimes a D.I., too, if available) will get bussed to a mono Aux Input, where I’ll apply any additional EQ and compression. EQ’ing and compressing the combined signal helps avoid EQ phase weirdness between mics, and helps solidify the instrument’s envelope (attack, decay, sustain, release). As an added bonus, it also sums things down to one fader for easier balancing and automation.
Parallel Extremes When Needed
From there – if needed – I might add a couple “parallel” processes by sending the Bass buss (via pre-fade send) to an aggressive compressor Aux Input, and/or an amp sim Aux Input, and/or a subharmonic effect Aux Input (this is similar to what @detective recommended in his helpful post – thanks, Paul!). Sending the Bass buss to multiple returns using a “pre-fade” aux send allows me to turn the original dry Bass buss fader all the way down while I dial in my aggressive parallel effects returns. I always go “aggressive” with parallel FX ’cause otherwise what’s the point? The beauty of parallel is I can add these faders to the dry Bass buss in small increments. Kinda like adding a few dashes of ghost pepper hot sauce to a huge pot of chili. A little goes a long way, and its potency is what makes it so useful in a large batch– err, I mean so useful in a dense audio mix.
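If it helps to picture the “ghost pepper” math, here’s a tiny sketch: the dry Bass buss stays at unity and a deliberately over-the-top parallel chain gets tucked in at a low return level. The saturation stand-in and the fader value are made up for illustration – the point is simply dry-at-unity plus a small amount of something aggressive.

```python
import numpy as np

def slammed(x, drive=8.0):
    """Crude stand-in for an aggressive parallel chain (comp / amp sim): heavy saturation."""
    return np.tanh(drive * x) / np.tanh(drive)

def ghost_pepper_blend(dry_bass, return_fader_db=-15.0):
    """Dry bass bus at unity; the aggressive return tucked way underneath."""
    g = 10 ** (return_fader_db / 20.0)
    return dry_bass + g * slammed(dry_bass)
```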
Hope these tips help, Drew! Feel free to share a sample of your current mix as well as a sample of one of your fav references, and maybe we can get some more ideas flying. Good luck! 🎚️⚡️
-
Oh, one last thing about adding low-end EQ to stuff like bass and kicks … try cutting a little just above where your boost sits. This often helps me get a more focused sound and eliminates mud. For example, try a low-shelf boost at 100Hz in tandem with a significant bell-curve cut at, say, 200Hz, or wherever the mud or resonance is building up.
A high-pass filter is also your friend for low-end, backwards as that sounds. For example, try boosting 60Hz while running a steep high-pass filter at 40Hz. You’ll be able to push that 60Hz harder without the sub frequencies below 40 eating up all your mix headroom and destroying your speakers. 🤘
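For anyone who likes seeing that move in code form, here’s a rough Python/scipy sketch of “boost the lows, cut the mud just above, high-pass the sub junk” using textbook (RBJ Audio EQ Cookbook) filters. The corner frequencies come from the examples above; the gain amounts and Q are made-up starting points, not a recipe.

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def rbj_low_shelf(sr, f0, gain_db):
    """RBJ cookbook low shelf (shelf slope S = 1)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)
    cosw = np.cos(w0)
    b = np.array([A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
                  2 * A * ((A - 1) - (A + 1) * cosw),
                  A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha])
    return b / a[0], a / a[0]

def rbj_peaking(sr, f0, gain_db, q=1.4):
    """RBJ cookbook peaking (bell) filter – negative gain_db gives a cut."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    cosw = np.cos(w0)
    b = np.array([1 + alpha * A, -2 * cosw, 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * cosw, 1 - alpha / A])
    return b / a[0], a / a[0]

def tighten_low_end(x, sr):
    b, a = rbj_low_shelf(sr, 100.0, +3.0)     # low-shelf boost at 100 Hz
    y = lfilter(b, a, x)
    b, a = rbj_peaking(sr, 200.0, -4.0)       # bell cut where the mud builds up
    y = lfilter(b, a, y)
    sos = butter(4, 40.0, btype="highpass", fs=sr, output="sos")   # steep high-pass at 40 Hz
    return sosfilt(sos, y)
```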
-
Dana Nielsen
Administrator February 18, 2025 at 8:19 pm in reply to: Member Poll: Trends in file delivery when working with remote session musicians
😍
Oh, just remembered another “I love it when…” This one’s on my Stems Checklist at the link above:
I love it when mono sources come as mono files (e.g. a bass DI), and stereo sources come as stereo interleaved files (e.g. a Poly Synth DI).
Using my ears and meters and mono button and phase flip to determine that this “stereo” kick track is actually mono, then splitting the stereo track into dual-mono so that I can delete the original stereo track plus the left side of the newly-created multi-mono tracks … well … it’s a giant waste of time. Heck, it’s a waste of time to even write about that tedious, mundane, and unnecessary task, lolol.
Ok. Rant: over!
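(Well, almost over: if anyone wants to automate that check, here’s a rough sketch of the idea, assuming the Python soundfile library and an arbitrary “the channels are basically identical” threshold. Purely illustrative – a null test with your ears and meters gets you to the same place.)

```python
import numpy as np
import soundfile as sf

def fold_fake_stereo(path: str, out_path: str, threshold_db: float = -60.0) -> bool:
    """If L and R are (near-)identical, write a true mono file and return True."""
    audio, sr = sf.read(path)
    if audio.ndim == 1 or audio.shape[1] != 2:
        return False                              # already mono (or not plain stereo)
    left, right = audio[:, 0], audio[:, 1]
    diff = np.max(np.abs(left - right))           # the 'phase flip + sum' null test
    peak = np.max(np.abs(audio)) + 1e-12
    if 20 * np.log10(diff / peak + 1e-12) < threshold_db:
        sf.write(out_path, left, sr)              # keep one channel as the mono file
        return True
    return False
```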
-
Dana Nielsen
Administrator January 7, 2025 at 2:14 pm in reply to: troubleshooting – track on spotify sounding quieter than expected
Woo-wooooo! 🎉 So glad to help, man!
-
Dana Nielsen
Administrator December 30, 2024 at 11:20 pm in reply to: troubleshooting – track on spotify sounding quieter than expected
Hey, @nategomoon! Thanks man – holidays have been good; hope yours have been too!
The only reason for -2dB or -1dB true peak is to protect the sound quality once your master is transcoded to one of the shitty– err, I mean “lossy”– compression formats we all listen to most of the time. This is especially noticeable when listening on the free or ad-supported tiers of various DSPs, as well as on all social media feeds.
Lemme explain… That super loud -0.01 dBTP master will sound awesome when played back in a lossless format like WAV or FLAC, and it’ll probably also sound pretty damn good as a high quality mp3 or Ogg Vorbis on the paid versions of Spotify and YouTube. But if you’re listening on a free tier (or on any social media platform) you ain’t gettin top shelf audio! They serve freeloaders the “well” spirits … a weak 2.5kbps shot of rum mixed with flat lukewarm Diet-Codec served in a Variable Bit Rate Dixie Cup… with no ice!
Suffice it to say, super loud mixes mastered to the bleeding edge of -0.01dBTP often sound distorted once transcoded to lossy formats with lots of data compression.
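If you want a quick sanity check on your own true peak before bouncing, here’s a rough sketch of the usual trick: oversample the signal and look at the peak of the oversampled version (loosely in the spirit of the standard true-peak measurement). A made-up helper, not a replacement for a proper meter:

```python
import numpy as np
from scipy.signal import resample_poly

def estimated_true_peak_db(x, oversample=4):
    """Rough true-peak estimate: 4x oversample, then take the max absolute value."""
    up = resample_poly(x, oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)) + 1e-12)

# e.g. aim for estimated_true_peak_db(master) <= -1.0 before handing it to lossy platforms
```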
One indispensable tool in my plug-in arsenal that helps me get ahead of this guessing game is ADPTR Streamliner. (And, wow– it’s on sale right now for $39! That’s a steal.) It’s a really cool plugin that lets you hear in real-time what your song will sound like on ANY tier of ANY platform — it’s like hearing into the future!
It really is eye-opening (ear-opening?) to poke around, auditioning all the different codecs. And you can even click the “Artifacts” button to hear ONLY the residual artifacts. If you’re a fan of submerging your head underwater while crinkling aluminum foil as close to your eardrums as possible, you are going to LOVE the Artifacts button! 🔊🥴
plugin-alliance.com
NEW VERSION 1.1 - Codec auditioning, automatic level matching, and state-of-the-art loudness and dynamics metering to get the perfect master for all major streaming services.
-
Dana Nielsen
Administrator December 17, 2024 at 10:37 pm in reply to: Mixing with only the Midrange!!!???
YESSS! You’re en fuego, homey!! Love that you’re diving deep into this stuff – and crushing it!
Lookin fwd to hearing your next masterpiece! ⚡️🎚️🥰
-
Dana Nielsen
Administrator December 17, 2024 at 10:35 pm in reply to: Mixing with only the Midrange!!!???
This is so great, Paul!!! Excellent points (and pretzels 🥨🤤)!
-
😂😂😂
You’re too kind and too funny, Paul.
No need to list me in any particular order, @drewb – It’s a team effort here! 😂 🥰
-
Right on, Drew! Happy to help!
And, while I do use SubSynth quite often (love that plugin), I know what you mean … it can quickly add the wrong vibe to a natural-sounding acoustic record. In those cases I might try only using the highest frequency knob of the SubSynth (50-60Hz if memory serves) as that tends to sound less like an octaver. Or I might try Waves R-Bass as an insert on the bass bus, which works a bit differently – more psychoacoustic than synthetic octave.
Happy bass-boosting! 🔊