Rob Ruccia is the epitome of a lifelong musician. After a lengthy career that has touched every conceivable corner of the music industry, Rob has perhaps made his biggest name in the credits of your favorite plugins and music creation tools.
Fluent in Pro Tools since version three and always striving to stay ahead of the tech curve, Rob has become a go-to engineer for some of the biggest names across the entertainment spectrum. His resume includes everyone from legendary jazz musicians like Darryl Jones and Wallace Roney to YouTubers like Rob Scallon and Andrew Huang.
A few months ago, Rob was kind enough to talk to us from his home base at Uptown Recording in Chicago, where he has been the chief engineer for over 20 years. We talked about the early days of recording tech, the dark side of the music industry, and the one lesson you have to keep learning the longer you live in the studio.
How are you doing on this wonderful Wednesday?
Pretty good. It’s going to be a busy evening for me; I’ve got a bunch of video syncing to do.
Fun stuff. Let’s get right into it: for those who don’t know, tell us a little bit about yourself.
Well, I’m a musician first. I became a recording engineer as a way to facilitate getting my music out there. I became a professional touring musician and made a couple of records on major labels, then fell back into the studio and never looked back.
Do you remember when you first fell in love with the music-making process?
Growing up there was a piano in my house, so there was always something to push and make noise with, whether it was a pitch noise or a rhythmic noise. My parents allegedly complained about it, but they also fueled it by never taking the piano away. They eventually got me lessons, and even got me a trombone when I wanted to start playing in the school marching band and orchestra.
Orchestra taught me how to read music, and then I started playing bass. That led me to a metal band, which got me a record deal…and then I realized that it cost way too much money to make records on a record deal, so I learned to do it myself.
What was the first thing you saw in the studio environment that made you think, ‘This is it’?
When I saw moving faders for the first time in maybe 1993-1994. I thought I was tripping for a second, like my mind was playing tricks on me. Now of course I know it was MIDI-based automation, and very archaic compared to what’s available now, but it was one of the things that led to me getting bitten by the tech bug.
It helps that I was already into the computer side of things, and that was where audio was headed at that point. I got in on Pro Tools at version three, which was a lot easier to navigate than a four-track or a cassette-deck recorder, or even analog tape. It was a matter of just finding the fastest way to get my music out. It was a means to an end that led me to this particular end.
Did it purely start as a means to an end, or was there something specific you loved about the process from the beginning?
I fell in love with Pro Tools because it was like the visual side of music that I was never able to access on tape. Before Pro Tools, you couldn’t see the music you were working on. I know it’s “best practice” to primarily listen while you’re working, but I can read waveforms like a language. I look at a screen and see where the ‘words’ are—they don’t look like words to anybody else, except for people who can read them like I can.
It’s a passion that I got into because, first and foremost, I like being able to control things and have them automated. Between majoring in acoustic science and my career as a professional musician, I understood the basic premise of how to treat rooms and make things sound good. I eventually started to realize that if my professional music career wasn’t going to make me the rock star I thought I would be, then maybe I could be the wizard behind the curtain.
How did you end up majoring in acoustic science? Was that always the plan?
Luckily, Columbia had a pretty well-known local acoustician running the acoustic science program. Since he wasn’t someone I’d be interning for, I decided to study with him. I really wanted to learn about what made sound ‘good’, as well as what made it dangerous and exciting. My final paper was on sonic warfare, and the research for that project blew my mind. There are so many ways to destroy stuff with sound. [laughs] But I use it to create.
Had you worked in studios or recording before you went to college?
Yeah. My band in high school did our first recordings on an 8-track reel-to-reel, and we upgraded from there. I had my own Portastudio, then moved to ADAT recording on tape. At that time, computers and digital workstations were super unreliable and very risky to work with—basically, if you didn’t want to lose your whole project, you worked with tape. That was kind of my path to the future of the recording process.
Was it specifically that element of control and being able to visualize all of it that drew you closer to the studio?
I really wanted to be able to control what I heard. I’d learned that paying other people to try and get the sound I had in my head was a futile effort. At a certain point, I felt like I was just spinning my wheels. I started to say, ‘What if I could do it myself and get it to sound the way I want?’
Eventually people were willing to pay me to get it to sound the way they wanted. Low-budget projects often came with unrealistic expectations at first, but the bigger-budget projects gave me the kind of control I wanted, so the vision in their heads could come out of the speakers.
That’s really where the control factor comes in. Not to say I’m a control freak or anything—it’s just really nice to be able to control every aspect of what ends up going out. A lot of people listen to my work, so it’s rewarding to see the sound I worked on enjoyed by hundreds of thousands or even millions of people.
“There’s always gonna be the need to have somebody record your stuff and get it out there…After I left the stage, people came to me because they knew that I knew my stuff. I made good-sounding things because I came from a world of good-sounding bands and good-sounding recordings.”
That’s actually a perfect segue into my next question: how did First of October get started?
First of October was born by accident. I did a project with Rob Scallon before he was a YouTuber. When I first met him, he was just the bass player in a band that came through Uptown Recording. He was just out of high school, maybe even still in high school, at the time. From there, he had me master some of his other projects, and then he eventually brought in a couple of his own YouTube channel projects to do here at Uptown.
Rob and Andrew [Huang] just decided they wanted to get together and make a record in a day, and they asked me to helm it. They didn’t tell me that they hadn’t written anything at the time, though. I thought they had a record ready to go and they were giving themselves just one day to record it, which is a challenge I would expect from YouTubers, but no. Nothing was prepared—they just kind of came in and went for it. Sure enough, it became a viral thing that led to a second year and then a third and a fourth, going on to the fifth one now.
Any big plans for the 5th anniversary?
This upcoming year we’re going international again. I can’t spill the beans on it yet, but as an engineer, I am gonna weep when it starts. We’re going to be working in the holy grail of studios.
That’s a really fun teaser. Do you feel like a third member of that band at this point?
Yeah, and they consider me that. That’s why they brought me to Canada instead of getting a local engineer, and that’s why we’re going on another trip this year. The two of them also now have a new series that came out on April 1st called Sonic Boom. We made 12 episodes in a week up in Canada at this studio called Noble Street, which is a really awesome facility.
Oh that sounds very cool, talk a bit more about that.
The show is like a mini First of October. Each episode is a challenge, with ideas flying around at a breakneck pace until they’ve got something. They rely on me to help them make it happen, because I know what to expect from one of their sessions. At the end of the day, when they’re done throwing a thousand things at me, I can still make it sound like a record.
Speaking of First of October, the “Album in a Day” process is so high-octane and frenetic, yet you never struggle to keep up. Were you always naturally adept at navigating a studio environment, or was it a more gradual development to the point where you were able to handle a session like that?
Well, studios have naturally adapted to the needs of the people operating them—they’ve gotten increasingly ergonomic, and things are always within reach.
In the first studio I worked in, I was intimidated by everything: patch bays, moving faders, microphones on stands with giant counterweights I’d never seen before, wondering why until I realized they were worth more than my car, that sort of stuff. But always being willing to grow with and adapt to the technology allowed me to get as fast as I am with Pro Tools, which helps facilitate projects like First of October. The days of the 14- to 15-hour studio sessions are kind of gone.
Can you talk a bit about the process of becoming the head engineer at Uptown Recording?
My first paid studio gig was as an assistant engineer cutting tape at a studio in Illinois called Sound Video Impressions; they were the first studio to have a 16-track tape machine in the ’70s. That was when Pro Tools and another platform called Sonic Solutions were the first DAWs people were using to do professional editing from analog dumps. I ended up diving so deep into Pro Tools that my name is actually in the credits of the software, because I’ve been a tester for so long.
I was 15 or 16 when I started learning on a four-track, was learning how to automate by 19 or 20, and then paused at 25 to go on tour with huge bands like Godsmack, Deftones, even Nonpoint, who I still work with now. Then, in about 2002, I fell into Uptown Recording and sort of just made myself chief engineer.
When Uptown was founded, there wasn’t anyone there who had the same intimate knowledge of Pro Tools and more modern recording technologies that I did, nor did anyone have a clientele as large as mine. The studio needed business and a chief engineer, so I just jumped in and never looked back.
Was leaving the stage for the studio always part of your plan?
Yeah, the studio was always part of the plan; the plan was just accelerated when the music industry chewed us up and spit us out, as it does. We were coming from Chicago at the exact time Disturbed got signed, and they’re one of the biggest metal bands that ever broke from Chicago. There were seven other metal bands that got signed at the same time, and they’re the only one that still plays stadium tours. We all got thrown against the wall, and almost all of us fell off the wall.
That’s what got me thinking more about the behind-the-scenes side of the business. There’s always gonna be the need to have somebody record your stuff and get it out there, and the one that does it better is gonna be the one that gets the business. That definitely translated. After I left the stage, people came to me because they knew that I knew my stuff. I made good-sounding things because I came from a world of good-sounding bands and good-sounding recordings.
Thinking about all the artists that you have worked with, are there any particular sessions that stand out?
I’ve worked a lot with a pianist and music director named Robert Irving, who was Miles Davis’s music director for 10 years. The sessions I do with him are awesome; he brings in incredible musicians. There’s a bass player he brought in named Darryl Jones who played with Miles and has played bass for the Rolling Stones for a long time. He’s also brought in some guys from Earth, Wind & Fire, one of the original singers from the Emotions, just a huge collection of killer, old-school, real musicians. They never need any editing—you get a bunch of takes and suddenly you’ve got 19 great options to pick from.
Those sessions are packed with invaluable information. I remember Wallace Roney, an amazing horn player who also used to work with Miles Davis, gave me all these different tips about mic placement for horns to get the same tone as Miles. That kind of stuff can’t be found in a textbook.
“I like being able to do things that you can’t actually do in reality.”
Were there any moments working with these legendary musicians where you found yourself in a state of shock, or have you had enough experience where you’re able to just get right into the job?
Oh sure, there were definitely moments early on where it was like, ‘Oh my god, these are musicians I’ve looked up to my whole life, and now I’m the guy recording them.’ It’s even more surreal as an engineer when these people are giving me compliments and telling me I’m one of the best they’ve worked with. But mostly, I don’t let it faze me. I think that comes from being in a national touring band and running into every major musician at festivals. You can’t be starstruck, because you’re peers at that point.
In the studio, I’m here to serve them in a way. I go out of my way to make things easy and comfortable for them, even down to making sure their headphones aren’t too loud. Those extra details mean a lot, because seasoned vets will bring up horror stories from other studio spaces they’ve been in. As long as I don’t become the subject of one of those stories, I’ve done my job.
All right, let’s switch gears. You have a lengthy history of testing products for Pro Tools and other companies, but you’ve been a tester for Slate Digital for a very long time too. Do you remember the first Slate Digital product you tested?
It was the ML-1 mic. I got Nonpoint to come in in February 2016 to do their first album on Universal. It was huge for me because they booked the entire month. The version of the ML-1 I used with Elias [Soriano, singer for Nonpoint] was a beta version. There’s an album out there with that early ML-1 on it, all just because I was testing it at the right time.
Eventually, testing the mics got me beta licenses for the entire Virtual Mix Rack, and the rest is history. I love testing hardware—it’s fun to actually physically see the growth in the product.
Do you have a favorite of the virtual mic models?
The 67. When I did Sonic Boom with Andrew and Rob up in Canada, Noble Street had a pair of real 67s. I had never used real 67s before, only the models, and I was really pleasantly surprised when everything sounded like I expected it to. They sounded just like my ML-1s.
Speaking of the mics, I remember hearing you talk about a technique you discovered with the ML-1 where you’d automate switching between different models, depending on what kind of sound you wanted in different parts of the song. Could you talk a little bit more about that?
Yeah, that’s the best accident. I just thought about it in my head: instead of EQ’ing or using things like Soothe or other plugins that are out there to kind of smooth out harsh transients, I figured a darker mic might help. Sure enough, the issue got better, and I didn’t have to automate EQ or anything.
I just got creative with it. I like being able to do things that you can’t actually do in reality. You wouldn’t be able to run into the room and swap mics that quickly in real time; and even if you had two sets of mics on the overheads, they wouldn’t be in the same positions, so you’d have some weird timing or phasing shifts happening.
That’s one of my favorite things about the ML-1: no matter what mic you’re emulating or if you’re setting up a stereo pair, you’re still dealing with individual mics, so understanding how to place them equally and correctly is still important. More and more things are getting removed from the skill set of an engineer because the software can take care of it for you; in this case, I decided to flex my technical know-how and use the software to make the previously impossible, possible.
I do it with horns too: I just did a session where I duplicated all the horn tracks, and then changed the mic on the duplicates and blended the takes between a ribbon mic and a dynamic mic. In the real world, I would never be able to get those in the same or optimal position in front of my horn while keeping everything in phase and time-aligned. It’s just really a helpful tool to experiment with.
With that in mind, do you take a hard stance on either side of the “Analog vs. Digital” debate?
I honestly use whatever sounds best. I often think back to something Chris Lord-Alge said, maybe even in a Slate Academy video, where he said, “Nobody’s gonna die if you turn your treble up to +15.” It’s true. I’m one of those naturally cautious engineers who came from the analog hardware world, so I don’t want to blow up my preamp or have it fail on me because I’ve run some super-hot signal into it. In the virtual world, it literally doesn’t matter. Producers and engineers every day are exploiting the fact that you can have a Neve glowing all day long and it’s not going to die.
Perfect example: when I was working on Sonic Boom, Rob and Andrew had a real RCA 44 from the late 1930s or early 1940s. It was a super expensive mic, and I felt so nervous touching it. I would never take that mic anywhere or put it in front of anyone I didn’t trust. Meanwhile, with an ML-2, you can get something for $150 that sounds just as good as the original when the software’s applied. That’s a big reason I’m so keen to always be on board and a little ahead with the new stuff: if it sounds just the same as the vintage, what’s the difference?
Are there any pieces of advice you’d offer to newer, up-and-coming engineers who are hoping to develop a portfolio as extensive as yours?
It’s so key as an engineer to have infinite patience. That’s something I find myself struggling with more and more as time goes on. With technology making things so quick and seemingly effortless now, clients expect results so fast, but you have to have the patience to get there. Getting a good result at the end of the day is what the job is all about, but you also don’t want to move so slowly that you’re leaving tons of studio time on the table. As long as you remember to be patient and capitalize as much as possible on the available studio time, you’ll find success in your sessions.
That feels like as good a place to wrap up as any. You already gave us a couple of nice teasers, but is there anything that you can actually preview for us before I let you go?
Sure. I’m getting ready to do more Nonpoint recording. They’re going to be working with Chris Collier, who did the last Korn record, so that’s a pretty big deal. I’ve actually turned him on to a bunch of Slate Digital stuff; he’s been using the ML-1 a lot.
I also got one for Elias from Nonpoint so he doesn’t have to come to the studio to record vocals. That technology was especially key during the COVID lockdowns. Singers could track at home and send me their stems, and I could still use them. It’s nice to work with because while most of the magic is happening in the software, you’re still dealing with hardware. It’s not like an AI coming up with a new Drake and The Weeknd song.