Forum Replies Created

Page 1 of 3
  • Paul Tucci

    Member
    March 16, 2026 at 10:13 am in reply to: Headphone Level Question

    Jesse,

    Another variable in this situation may just be the headphones themselves. Some are more efficient at converting a certain voltage level into loudness. I’ll make an automotive analogy. Some cars can go further on a gallon of gas than others. The ones that have better gas mileage are more efficient at translating the inherent energy in a gallon of fuel into work as measured in miles traveled. Do you both listen on the same model headphones?

Yet another possibility would be the impedance of the headphones. The low-impedance (resistance) types (8-ish ohms) are a bit more efficient / louder than the high-impedance types (600-ish ohms). That’s just the law. Oddly enough, the Ohm dude, Georg Ohm, would be 237 years old today. Nice timing, Jesse.
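For the numbers-curious, here’s a quick Python sketch of the Ohm’s law side of this (the one-volt figure and the loads are hypothetical, and real loudness also depends on each model’s sensitivity rating, which this ignores):

```python
# Rough sketch of why low-impedance headphones often play louder
# from the same headphone output: at a fixed voltage, Ohm's law
# gives power P = V^2 / R, so a lower R draws more power.
# (Real loudness also depends on each model's sensitivity in dB/mW.)

def power_mw(voltage_v: float, impedance_ohms: float) -> float:
    """Power delivered into a load, in milliwatts."""
    return (voltage_v ** 2) / impedance_ohms * 1000.0

V = 1.0  # one volt RMS from the headphone amp, hypothetical
for ohms in (8, 32, 600):
    print(f"{ohms:>4} ohm load: {power_mw(V, ohms):7.2f} mW")
```

Same volt, wildly different power, which is part of why 600-ohm cans sit quieter on the same output.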

    @-PT

  • Paul Tucci

    Member
    March 12, 2026 at 9:06 am in reply to: Mix Review

    Patrick,

I think you may be asking the wrong question here. Instead of asking for mix feedback, asking the collective the question “How does the song make you feel?” might elicit more response, because what you’ve written / expressed / and/or captured is terrific storytelling. I was drawn in and unexpectedly emotionally manipulated. Your musical intro does not at all foretell the ending, but your disarming charm does get us to the breakdown effectively. So well done!

That wide-eyed sense of wonder I detect in your music might just be your superpower. I think the French call that “Genois swa.” Not to be confused with Beni swa.

    @-PT

  • Paul Tucci

    Member
    March 7, 2026 at 11:55 am in reply to: Mix Review

    Wait a second! Is this the same Patrick Harber who last posted that sweetest of lullabies?

    Good God man, you’ve got range.

    @-PT

  • Paul Tucci

    Member
    October 30, 2025 at 12:13 pm in reply to: Lining up audio from different recorders

    Jesse,
I have a question for you first. Did this clap emanate from the vibe player’s playing position? Or maybe from where you were playing?

    PT

  • Paul Tucci

    Member
    January 25, 2025 at 5:05 pm in reply to: Vocal Delay – to be or not to be

    Drew,

I might be the first to say it, but I’d bet anyone who reads your question will wonder what “pulling down the vocal in the mix” sounds like. A before and after would help me hear exactly what your symptom is and what the problem might be. How might you be using the delay? As a plugin on a channel? How much delay time are you using? Enough to create a slapback, or little enough to create comb filtering, which literally could cause the level to drop? What’s the mix percentage of the delay if it’s on the channel? An equal-level (50%) mix of a dry and delayed signal is the perfect recipe for comb filtering.
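To put numbers on that comb-filtering recipe, here’s a small sketch (the 1 ms delay is hypothetical, and numpy is assumed) of the frequency response you get from a 50/50 dry/delayed mix:

```python
import numpy as np

# Sketch of comb filtering from a 50/50 dry/delayed mix:
# y(t) = 0.5*x(t) + 0.5*x(t - tau). The magnitude response is
# |0.5 + 0.5*exp(-j*2*pi*f*tau)|, which hits a full null
# (complete cancellation) at f = (2k+1) / (2*tau).

tau = 0.001  # 1 ms delay, hypothetical
freqs = np.array([0, 250, 500, 750, 1000, 1500])  # Hz
mag = np.abs(0.5 + 0.5 * np.exp(-2j * np.pi * freqs * tau))

for f, m in zip(freqs, mag):
    print(f"{f:5d} Hz: gain {m:.3f}")
# With a 1 ms delay the first null lands at 1/(2*tau) = 500 Hz,
# and full level returns at 1/tau = 1000 Hz.
```

That alternating pattern of nulls and peaks up the spectrum is the “comb,” and it can absolutely read as the vocal getting quieter.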

Gain staging?? Is the output lower with the delay engaged in line but set to 0 time? Does the vocal level actually get louder with the delay plugin bypassed?

    So many questions….

Is your terminology accurate? Are you saying the actual vocal level goes down, or might you mean that, with the delay effect, the vocal recedes back into the music? Instead of in-your-face-leading-the-charge vocals, they’ve moved back into the mix?

I ask because it might be heretical to USE an “Elvis” slapback delay sound in your working genres.

As Waylon Jennings once wrote, “I don’t think Hank done it this way.”

    Excellent subject line by the way. Reminded me of an old entry in my to-do list of rap lyrics.

    “You quote Shakespeare, I quote Dylan.

    I like Beastie Boys when they’re illin’ “

    @-PT

  • Paul Tucci

    Member
    March 16, 2026 at 3:41 pm in reply to: Headphone Level Question

    It is not. What kind of self serving knucklehead would do that? I wouldn’t, until maybe next week.

  • Paul Tucci

    Member
    March 12, 2026 at 2:17 pm in reply to: Mix Review

    👨‍🎨

  • Paul Tucci

    Member
    January 9, 2026 at 9:06 am in reply to: Mastering process question

    Dana,

    Thank you for the late night ramble. Keen observations as we have come to expect here.

I had, in fact, thrown a 48 dB/oct high-pass filter across my L/R buss as the first process in my mastering chain. My ears said 51 Hz was the spot to park the filter and do no damage. I trust my calibrated headphones. I then took that mastered file into RX 11 to see if any of its processes would help. A little de-clicking cleaned up some remnants of what I believe to be artifacts of limiting. Visually, there was certainly some energy below 20 Hz, but audible? No.

Nonetheless I persisted and ran multiple passes of the 20 Hz-and-below fade-out process to effectively lower the amplitude, which caused the LUFS level to change for the worse, sparking this whole conversation.

Part of my brain says to try doubling up on the steep high-pass filter but set the second one down at 20 or 30 Hz to see if that also causes a slight LUFS level change. Another part of my brain then said, “Fuck the meter, trust your ears!” My remaining brain cells tell me that maybe we should tamp down on this #PT4prez thing. I have some unresolved legal issues in Arizona; there’s no sense in drawing attention, ya know? Like Obi-Wan said, “I’d like to avoid any Imperial entanglements.”

    Certainly appreciate the effort and insights. I’ll get back to you after further experimentation.

    @-PT

  • Paul Tucci

    Member
    November 19, 2025 at 3:03 pm in reply to: Lining up audio from different recorders

    Jesse,

    Yes, I’m trying to convey the use of two different instances of the phone audio. One for the ambience of being live in the woods listening to the music so it doesn’t come across bone dry. (Who have I become?) If that stereo phone track needs a little EQ to help it talk with the music better, so be it. Anything that helps the illusion is fine.

The second use of the phone tracks would be as the driving source of the highly reflective / small-room plate sound that I’m suggesting be the “unnatural,” effect-y ambience. Perhaps wider, to exaggerate the space. Tonal seasoning to taste, but I’m guessing a low-passed version here will do two things. One, let the close-miked instruments keep their low end distinct and intact, and two, allow you to subtly and magically change the listener’s perception of the environment when and if the music hits its stride, becomes greater than the sum of its parts, and transports the listener.

    @-PT

  • Paul Tucci

    Member
    November 18, 2025 at 3:39 pm in reply to: Lining up audio from different recorders

    Jesse,

    P-$ here.

    “I’m a white man, a white man in black socks.

    I wear grey shorts; tank tops and dreadlocks.”

    No, I’m sorry. I think you wanted the other one…

Regarding your question: yes, I was suggesting you use the phone mic as an ingredient in your mix of close-miked parts. Something to give ambience and support the viewers’ point of view of being in the woods. Slight breeze, rustling leaves, woodland critters crunching the leaves as they scoot through. Speaking of critters, I would avoid the geese motif you used in the rowboat series; that’s so last summer.

I’ve not tried it, but I do believe using a separate copy of the phone mic recording, slightly (100 Hz) high-passed, as the send to a plate reverb (H3000 Tight and Bright) might could add a sense of hearing reflections off the trees. Some sort of plate verb program meant for percussion that includes a bunch of early reflections and small-room (<1 sec) parameter options.

    Creating an inviting and believable audio setting first can help you hook your audience before the musical story unfolds, much like the cinematic effect of looking out over desert in springtime bloom but hearing what appears to be a rattling sound. Adios cowboy.

  • Paul Tucci

    Member
    November 6, 2025 at 9:01 am in reply to: Lining up audio from different recorders

    @Dana

    What a great deep dive. I will echo the big thanks for your effort and letting us in on your process.

Jesse, the last 5 minutes of the music led me into a calm and beautiful sleepy time, and it’s just noon my time.

    @-PT

  • Paul Tucci

    Member
    November 4, 2025 at 5:18 pm in reply to: Lining up audio from different recorders

    Jesse,

    You’ve definitely opened a can of worms thinking about managing different arrival times, huh?

Your initial thought of pushing earlier arrival times back in time (guitar amp / vibes mics) to coincide with the latest-arriving one (phone mics) is logical. It’s how we manage delay-tower speakers at festivals. Hold (delay) the signal feeding the delay tower until sound from the main speakers has made the trip through the slow medium of air at the speed of sound. As Dana pointed out, roughly a foot per millisecond. The two sources of sound can combine constructively if time- and polarity-aligned. That translates to greater intelligibility and an improved S/N (signal-to-noise) ratio. More dry signal level than ambient signal level.
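For anyone who wants the arithmetic behind the foot-per-millisecond rule, here’s a tiny sketch (the tower distances are made up, and the speed of sound is approximate and drifts with temperature):

```python
# Sketch of the delay-tower math: sound covers roughly a foot per
# millisecond (~1130 ft/s at room temperature), so a tower 200 ft
# from the mains needs its feed held back about 177 ms to line up.

SPEED_OF_SOUND_FT_PER_S = 1130.0  # approximate; varies with temperature

def alignment_delay_ms(distance_ft: float) -> float:
    """Delay to hold the tower feed so it meets sound from the mains."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

for d in (50, 100, 200):
    print(f"tower at {d:3d} ft -> delay {alignment_delay_ms(d):6.1f} ms")
```

Same math works in reverse for pulling late DAW arrivals back: distance difference in feet is, give or take, your offset in milliseconds.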

    I’m really disappointed that the Stonehenge joke in the first reply didn’t land.

Because dragging and dropping start times in the DAW is so easy, you can opt to “pull” the late arrivals back to your self-declared start time. That’s what we were suggesting: “pulling” the phone mic arrivals back to the vibe overheads. Your close-miked guitar would be the first arrival time to show up in the DAW session if you’re both playing the ONE! Just pull all the simultaneously struck ONES back to your amp arrival. Voila! Signal alignment may help clean up the sound, OR not. If there’s not too much other-than-intended signal in the microphone, the time misalignment may add spatial character, which could be a better choice.

I believe it was Aristotle who said, and I’m paraphrasing here, “The more you know, the more you realize you don’t.” I just added the worms part.

@-PT

  • Paul Tucci

    Member
    October 31, 2025 at 4:08 pm in reply to: Lining up audio from different recorders

    Jesse,

Although I couldn’t spot it, I’m guessing the phone was somewhere on the “desk” on the stand. I was just checking that it was visually in front of the band.

As we compare waveforms of the stereo vibe mics and your stereo phone recording, we clearly see that the phone recording, which is further away from the noise source (the vibes), displays the start of the sound further to the right on the DAW tracks’ display. We’re basically looking at an X-Y graph: the X axis (horizontal) displays time; the Y axis (vertical) displays amplitude. We’re looking at your handclap. For a white man in flannel, that is a pretty damn funky-lookin’ clap. It arrives at different times because the physical distances from the source to the microphones are different. The phone mics are further away physically, so the sound of the clap (vibrations through the air) arrives later. Later shows up as a smidge to the right. View the screenshot… check? CHECK!

I’m going all science-y on the explanation here to provide backstory for those who may not yet understand the tech. I’ve had 4 kids from the local college’s sound recording program shadowing me at the historic State Theatre this month, and I found a groove when I went really basic and broke things down into easily digestible nuggets.

    So back to the original question …

The relative polarity of the phone recording to the vibes mics matters, especially if you slide the (later-arriving) phone recording to align with the start time of the first-arriving (vibes) signals. Find the first upward peak on the vibes; use that as your target. Find the first peak on the phone recording; if it is downward-facing, reverse polarity and then slide it over to the left to align with the first-arriving positive peak of the vibes mic. Realize this is fucking time-traveling magic. With the click and drag of a mouse, we rearrange time. Think about the poor bastards working at Stonehenge this weekend. As we fall back with a quick digital reset, they’ve got to move all those big boulders one hour back.
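If you’d rather see the flip-and-slide as code, here’s a toy sketch (numpy assumed; the two little arrays are stand-ins for the vibes and phone tracks, not real audio):

```python
import numpy as np

# Toy sketch of the align-and-check-polarity step: find the first
# strong peak in each track, flip the later one if its peak points
# down, then slide it left so the peaks coincide.

def first_peak_index(x: np.ndarray, thresh: float = 0.5) -> int:
    """Index of the first sample whose magnitude crosses thresh."""
    return int(np.argmax(np.abs(x) >= thresh))

vibes = np.zeros(16); vibes[4] = 1.0   # clap arrives early, peak upward
phone = np.zeros(16); phone[9] = -0.9  # arrives later, polarity flipped

if phone[first_peak_index(phone)] < 0:  # downward first peak?
    phone = -phone                      # reverse polarity

shift = first_peak_index(phone) - first_peak_index(vibes)
aligned = np.roll(phone, -shift)        # slide it back in time

print(first_peak_index(aligned))  # now lines up with the vibes peak at 4
```

A DAW does this with a drag of the mouse, but the logic underneath is exactly this: flip if needed, then subtract the arrival-time difference.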

I would first attempt signal-aligning the phone recording with the mics, then add a healthy level of it to see if that’s good au naturel ambience, i.e., gentle wind, leaf crunching, geese, etc.

Another experiment would be to use the phone recording as the high-passed send to a reverb.

    re: milliseconds of delay

When combining two identical signals where one is later: inside of 20-ish milliseconds they sound as one, but with some tonal consequences. At 50 milliseconds apart, the two sounds will start to separate into two distinct arrival times. At 100 milliseconds apart, you have arrived at Graceland, i.e., the “Elvis slapback.”

    Choose wisely

    @-PT

  • Paul Tucci

    Member
    October 6, 2025 at 4:09 pm in reply to: Thoughts on 88M?

    Bar,

    This is moody and pretty already. I was hoping @JBear would suddenly appear vocally and take it further.

  • Nate,

Haven’t forgotten you. My curiosity has led me to some new-to-me info that I’m digesting before I nail the answer to your dilemma. A couple things seem obvious so far. Those with deep experience in record making don’t pay much attention to LUFS. It’s like the plumber who can diagnose the problem and fix it in 15 but charges a couple hundred for the knowledge. Better yet, the analogy of Picasso pulling out a blank canvas and making a minimalist masterpiece in seconds because of all his previous work completed, having technique down cold, and then… intent. Those of us stumbling around in LUFS land currently will one day get it. So sayeth the already-knowers.

I’m also thinking it’s possible to cheat the loudness numbers to make the Integrated number lower so as not to have the song turned down by the streamers. Because that Integrated number is averaged over the entire song, any EXTENDED sections that are above the measurement’s gating threshold will be included in the overall measurement and thereby “dilute” the time weighting of the loudest sections. Your reference track’s (Greater Heights) Integrated LUFS number was diluted by 2 dB according to my experiment…. I deleted the quiet intro and outro of that song and, lo and behold, the Integrated LUFS level jumped up by 2. This MAY be why you heard your track as lower in comparison. If we’re only listening for which thing is louder and dismissing the quiet bits, we might could get fooled.
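Here’s a simplified sketch of that dilution effect (the per-block loudness values are made up, and the real BS.1770 measurement adds K-weighting and a two-stage gate that I’m skipping):

```python
import math

# Simplified sketch of how quiet-but-gated-in sections dilute the
# Integrated number: approximate integrated loudness as a power
# average of per-block loudness readings. (The real BS.1770 measure
# adds K-weighting and a two-stage gate, omitted here.)

def integrated_lufs(block_lufs: list) -> float:
    """Power-average a list of per-block loudness readings (in LUFS)."""
    mean_power = sum(10 ** (l / 10) for l in block_lufs) / len(block_lufs)
    return 10 * math.log10(mean_power)

loud_body = [-9.0] * 120    # two minutes of loud, chorus-level blocks
quiet_ends = [-25.0] * 40   # quiet intro and outro blocks

full = integrated_lufs(quiet_ends[:20] + loud_body + quiet_ends[20:])
trimmed = integrated_lufs(loud_body)

print(f"with quiet ends: {full:.1f} LUFS, trimmed: {trimmed:.1f} LUFS")
# Deleting the quiet bookends raises the Integrated reading,
# matching the Greater Heights experiment above.
```

With these made-up numbers the bookends pull the Integrated figure down by roughly a dB, and longer or quieter bookends would pull it further.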

    I’m not committing to that quite yet but am enjoying the exploration.

    PT
