
INSIDE THE MAGIC: A BEHIND-THE-SCENES DISCUSSION WITH KRK SYSTEMS


We recently had the absolute pleasure of speaking with Gibson Brands Pro Audio Director of Engineering and Sr. Manager of Product Development, Craig Hockenberry, specifically about the KRK lineup. We discussed the company’s product development process, tips for monitoring your mixes, the lifecycle of a product from conception to final production assembly, and much more. If you’re a content creator, this is incredibly interesting stuff from one of the preeminent brands on the market.

So, without further ado, please join us for a fascinating peek behind the curtain of one of our favorite pro audio brands. Special thanks to Mr. Hockenberry for being so generous with his time and expertise.

Could we get some insight from you on what the product development process looks like at KRK? A lot of artists are probably unaware of what actually goes into making a product in the MI or pro audio space.

Our R&D starts with speaking to our users.

We are fortunate to have a really cool network available to us, the KRK Kreator Alliance, which is composed of many industry-leading professionals. We regularly call upon them to get their input on product concepts that we may have and feedback on solutions that are already on the market, as well as suggestions on marketing efforts we can take to raise awareness down the road.

In addition to the Kreator Alliance, we’ll reach out to content creators, professional musicians and engineers, our commercial partners, and our sales and marketing teams. We’ll shoot some ideas out, gather the feedback, and then go from there.

The next step is to create a “product requirements” document. Based on user feedback and what we learned from market analysis, we outline product functions and what the product should look and sound like. Even target technical specifications are captured. From there, the document goes through a stage gate process, which, once approved by stakeholders, results in a separate “technical requirements” document. The R&D team then evaluates both documents and turns them into an architectural solution, which is what is used to develop product throughout the remainder of the process.

Once approved to begin the development process, the solution goes through several different stage gates. The team will start building preliminary schematic diagrams, the acoustic design, the mechanical design, and software, if needed. We have an amazing multidisciplinary R&D team here with mechanical, electrical, acoustics, and software engineers. Each of them has specific tasks within a project, which are outlined in the technical requirements document.

When preliminary design is complete, the product then goes through several stages of prototyping. If the first prototype works out—great, then the project advances to the next stage. If not, the problem gets reported, and we iterate prototypes until the issues are corrected.

At the prototyping stage, I assume you’re testing out things like materials and physical builds. What would be some of the reasons that a product wouldn’t get approved at that point?

We design our own drivers from the ground up, everything from the magnetics and voice coils to cone and dome materials and designs. Let’s say we are prototyping a brand-new tweeter design and we tool up a new dome—we’ll perform a mechanical incoming inspection to ensure it meets the specs of the model that we designed. Then, we’ll also put it through detailed acoustics testing, including using our scanning vibrometer that measures and visualizes modal responses of the diaphragm. If something doesn’t match our criteria, then we’ll have to go back to the drawing board. Perhaps not to fully redesign, but maybe change the design, material, or treatment to meet the criteria. So that would be a case for when and how we’d create another prototype.

After the prototyping stage, what does development look like from there?

The product then goes to what we call an “alpha” stage, which is basically an advanced prototype level, but now we’re hard tooling. Since most of our designs require new tooling for custom drivers and plastic or aluminum baffles or faceplates, we’ll hard tool and put it through iterations of testing. Once all tooling is approved, we advance the project to the “beta stage,” where we have additional physical units for more complete in-house and in-field testing. At this stage, we are sending the product out to our Kreator Alliance and a team of trusted beta testers, who will put it through its paces in the field and provide feedback. If there is an issue that was missed in the other stages, we will correct it and move on to pilot production.

Pilot production is performed to verify that all production and operating procedures are good to go. It’s more of a factory-level thing, so at this point, the development is considered complete.

Is that sort of like a test run?

Yes, like a test run, exactly.

And then we’ll get those units back again to triple-check and make sure they’re OK before we pull the trigger to go into mass production.

Once the product is in production, we’re done with the development side of it. The R&D team will continue to support the manufacturing team if needed and issue ECRs/ECOs (engineering change requests/orders) if further improvements are necessary.

What would a common timeline be like?

From, say, idea generation to final assembly and shipping to customers and retail?

It really depends on the complexity and innovation that the product requires. If we have to develop a completely new, bespoke material, for instance, that could take up to two years.

Oh wow. OK.

But I would say typical design time is a year to 18 months. If we were to use a brand-new technology, it could take up to two years before we get into production.

That’s really cool information, because we talk to a lot of manufacturers, but we rarely get to convey what actually goes into making a product. And I don’t think many artists really have a good grasp of the number of steps, teams, and departments that are involved, or how much testing, refinement, and quality assurance happens before anything hits a store shelf.

There’s a team of engineers and product managers on these projects, and they’re working diligently to test the products and make sure they’re reliable. We go through very extensive extended-life testing for reliability, and we put these products through their paces to make sure they’re not going to fail in the field.

I’d love to kind of switch focus now to the artist side of things. So once the product is actually on the market, obviously it’s intended to help people make music and create content. One of the things we want to talk to our audience about is getting into the content creation game. Since you specialize in monitoring, both with headphones and speakers, let’s say I’m a musician or a content creator looking to get started for the first time and really corral all the different equipment I need. What would you suggest for a first timer to get for monitoring playback and audio for their content creation? Would you go headphones first or would you go speakers first?

I would probably do a combination of both. I wouldn’t stick to just one; the reason being that you need to create in a very quiet environment. I know a lot of creators work from home and they need a quiet environment. Many times, if they have a family or roommates, they’ll need good ‘phones to do that. However, there are some downfalls to headphones—they don’t really create the ambience that speakers can.

Speakers will provide a better stereo image. They’re more realistic sounding than headphones because we hear sound sources out of both ears. With speakers, you can hear multiple sources out of both ears, whereas headphones only give you one source in each ear. And now that we’re speaking of the head, everybody has their own head-related transfer function that plays into the way they hear. There are expensive headphones on the market that try to simulate that head-related transfer function, but it’s one algorithm. It’s not personalized, and everybody hears differently. The way the sound diffracts around your nose and head is different than the way the sound diffracts around my head, you know?

Right.

So, if you want true accuracy, speakers are really the way to go. However, headphones are helpful for dialing into a detailed view of the sound mix, say vocals or something like that. In that case, due to their isolation and separation, headphones might be the way to go because they’ll help you concentrate on a very particular part of your music or content.

In terms of speakers, I know there’s often some confusion when setting up studio monitors. Specifically, is there a certain distance they need to be away from walls and reflective surfaces? Do you toe them in? Do you recommend that? How do you approach sound staging? Any sort of tips you can give for someone setting up a home studio for the first time?

A good place to start is our KRK Audio Tools app, which is available for iOS and is currently being optimized for the latest Android software. It’s free to download and gets you in a very good starting space with your setup. I do recommend toe-in, and it’s a 30-degree toe-in on each side. Within our app, you can set your mobile device on top of the speaker and turn it until it alerts you that you’re exactly at 30 degrees.

OK. Cool.

A good rule of thumb is to set the monitors about a meter away to get your best stereo imaging. Point the tweeter directly at your ears, not at your eyes or into your body. That’s important, especially if a monitor is more directional. But that’s a good starting spot and you’ll get very good imaging that way. As far as the room goes, we have an automatic room correction algorithm on our GoAux 4 portable solution that’s very basic, and adjusts for low frequency boundary conditions, but that’s just a starting point.

Right.

It’s also extremely important to treat the room acoustically—this is always better than adding some sort of in-room correction. As far as boundary conditions go, depending on the woofer size and your speaker size, you should try to be at least three feet off the wall. As a general rule of thumb, try to stay equidistant from the two reflecting walls on either side.
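Putting those placement rules together, here’s a small Python sketch (with hypothetical numbers, purely for illustration) that lays out the classic arrangement: listener and monitors in an equilateral triangle, each speaker 30 degrees off the forward axis and toed in to aim at the listening position:

```python
import math

# Hypothetical numbers for illustration: listener at the origin, facing +y.
LISTEN_DIST_M = 1.0          # ~1 m from listener to each monitor
OFF_AXIS_DEG = 30.0          # each monitor 30 degrees off the forward axis

def speaker_positions(distance_m, angle_deg):
    """(x, y) of the left/right monitors for an equilateral-style triangle."""
    a = math.radians(angle_deg)
    dx, dy = distance_m * math.sin(a), distance_m * math.cos(a)
    return (-dx, dy), (dx, dy)

left, right = speaker_positions(LISTEN_DIST_M, OFF_AXIS_DEG)
spacing = right[0] - left[0]
print(f"left {left}, right {right}, spacing {spacing:.2f} m")

# Toeing each cabinet in by 30 degrees points its axis straight back at
# the listener -- the toe-in described above.

# Boundary check from the interview: stay roughly 3 ft (~0.9 m) off the wall.
wall_distance_m = 1.2        # hypothetical: cabinet to nearest wall
assert wall_distance_m >= 0.9, "move the monitors farther from the wall"
```

With a 1 m listening distance and 30 degrees per side, the speakers end up 1 m apart, which is the equilateral triangle the rule of thumb describes.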

Since we’re talking about mobile monitoring and production, the KRK GoAux speakers are perfect for that application because they come with stands that enable you to point the tweeters directly at your ears.

They also come with a handy little carrying bag, which is nice because you can easily bring them to a session. If you’re recording someone at their project studio and you want to bring your own gear, it’s super easy to carry the GoAux along with you.

If we can circle back to headphones for a minute; I’m interested in how headphones are developed by a company that’s primarily known for studio monitors. In the conceptual phase, when you’re making a new headphone product, do you visualize them as a scaled-down version of a studio monitor? Or is your approach completely separate from how monitors function?

It’s separate, because the acoustic field provided by headphones versus monitors is different. The overall thought is to make the headphones as accurate as possible. Other than that, the design and the thought processes behind the design are completely different because they’re a completely different acoustic beast.

One of the questions that some newer users might have is: I see the specs on headphones go from 20 Hz to 20 kHz, which seems to be the full spectrum of human hearing (and even beyond), yet the studio monitors I’m looking at don’t go down to 20 Hz. Why is that? Can you explain the practical reasons why specs on a headphone might be different than specs on a physical speaker?

It has to do with proximity to your ear. Sound is air particles moving as a pressure wave; it’s not the electrical sine wave that people usually see on a screen and are familiar with. Sound is a longitudinal pressure wave that has to physically travel a certain distance. For a low-frequency pressure wave to move a lot of air volume over longer distances, you need a very large radiating area. That’s why pro audio subwoofers that get down to 20 Hz are larger: they’re able to move more air.

Headphones, on the other hand, sit in very close proximity to your ear, so you can do the same thing with a much smaller driver; you don’t have to move massive amounts of air to get there. It’s just the air volume of your ear canal.
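To put rough numbers behind that, the standard baffled-piston approximation from acoustics textbooks (a generic model, not anything specific to KRK’s designs) ties output pressure to radiating area and cone excursion:

```latex
% Far-field on-axis pressure of a baffled piston radiator:
p(r) \approx \frac{\rho_0 \, S \, a}{2\pi r},
\qquad a = (2\pi f)^2 x
\;\Longrightarrow\;
p \propto S \, f^2 \, x
% S = radiating area, x = peak cone excursion, a = cone acceleration,
% rho_0 = air density, r = listening distance.
% Halving the frequency at constant SPL demands 4x the excursion or 4x
% the area -- hence large subwoofers for 20 Hz in a room, while a
% headphone driver only pressurizes the small ear-canal volume.
```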

If I’m a home recording artist working on an album or a song or whatever, and I want to implement a subwoofer, how do I start? I know directionality and placement is fairly important for my main studio monitors, but when it comes to subs, is it as important where that goes in the room or how that’s positioned or angled? Or is it maybe a little less just because the waves are so much bigger?

It’s not as important; subwoofers are typically omnidirectional. However, because you hear from both your ears, you can detect directionality and where the subwoofer is positioned in the room. So just because the sub is omnidirectional, that doesn’t mean you won’t know its location, and if it is sitting off to the right side, you’re going to hear the sounds coming from the right side, especially with the initial transient response. Now, once it blooms and fills the room, it’s kind of all-encompassing, but for that initial kick drum hit, you’ll be able to tell it’s coming from that area. So, at least some placement consideration is important.

Typically, you’ll want it somewhere acoustically centered between your monitors. That could be two subwoofers that sum acoustically to the center. Or, if you have one subwoofer, then it’s typically placed in the center of your room between the left and right monitors.

My final question: are there any tips you can give to someone just starting out assembling all of their studio pieces or content production hardware? Where would you begin? Looking at studio monitors specifically, what are some of the most important features that people should look for when shopping?

I would say that the most important thing is to get a monitor that’s suitable for your setup. If you have a very small room and setup, you’ll want to keep to the more compact end of the monitor range so the speakers can be comfortably placed in the room where you need them. You don’t want 10-inch, three-way monitors in a small space; it’s just not going to work. So, get something suitable for your room size—that’s probably No. 1. And then No. 2 is to purchase from a brand you’re very familiar with and that comes well recommended by others in the industry. Most monitors these days have good frequency response and phase response. Obviously, I’m going to recommend KRK to everybody, but as long as you are used to the monitor that you choose in your room, you really can’t go wrong.

Much like having a high-quality webcam with good lighting in your room, your audio quality also has an immense impact on your brand as a streamer.

And one of the easiest and most cost-efficient ways to improve your audio quality is by investing in an audio interface.

If you’ve done any research into what you need for a streaming setup that gives you modularity for the long term, it’s likely you’ve also read about audio interfaces.

But for a device that’s commonly associated with musicians and producers, what benefit does an audio interface provide to streamers?

What is an Audio Interface?

In order to understand the various benefits that an audio interface provides, it’s important to know what it actually does.

Put simply, an audio interface converts an analog signal into a digital format that can be read by your computer.

This is what allows musicians to record vocals or instruments in a digital environment. Or, in this case, what allows streamers to use analog equipment, like microphones that use an XLR connection, in software like OBS.

An audio interface effectively bypasses your computer’s internal sound card, and any audio processing now occurs on the interface rather than in the internal system. That segues perfectly into one of the most important benefits of using an audio interface: latency-free monitoring.
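To make that signal path concrete, here is a minimal software passthrough sketch using the third-party python-sounddevice library (our choice for illustration; the article doesn’t prescribe any particular tool). It copies the A/D-converted input straight to the output, with the buffer size setting the floor on software monitoring latency:

```python
# Minimal digital passthrough: mic input -> computer -> headphone output.
# A hardware interface does this routing in analog or on-board DSP; done
# in software like this, every buffer adds audible delay.
import sounddevice as sd

SAMPLE_RATE = 48_000   # samples per second
BLOCK_SIZE = 256       # frames per buffer; smaller = lower latency

def passthrough(indata, outdata, frames, time, status):
    if status:
        print(status)          # report any over/underruns
    outdata[:] = indata        # copy mic input to the monitor output

with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
               channels=1, callback=passthrough):
    sd.sleep(5_000)            # monitor yourself for five seconds
```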

These are the main benefits to using an audio interface:

1. Latency-Free Monitoring

Monitoring is the act of listening back to and analyzing any audio that you’re either putting out or receiving.

Any added latency can make this process incredibly frustrating. Latency is typically a symptom of relying on your computer’s internal sound card or the built-in drivers native to your system while recording or streaming live.

For example, a singer needs to hear themselves in their own headphones during the recording process. If there is any delay (latency) in their signal, it will be difficult to sing in time.

And in a situation like livestreaming, you don’t have post-production to clean up any mistakes, so having real-time feedback is crucial to delivering an exceptional performance.

Put simply, if you’re streaming a video game to thousands of viewers, you’re going to want to hear your mic signal and any other audio sources without any delay. (The quick buffer arithmetic after the list below shows where that delay comes from.)

Latency-free monitoring means:

• No delay on what you hear on your end (this is particularly critical with vocals)

• You’ll get a good sense of what the audience is hearing

• You’ll have full control of the final output of all audio sources in your headphones and can adjust the level to what’s comfortable for you

• It saves processing power on your computer, since the audio processing occurs on the interface instead of internally
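For a sense of scale, here is the generic buffer arithmetic behind those points (standard back-of-the-envelope math, with hypothetical buffer sizes; not vendor-specific figures):

```python
# A buffer must fill before it can be processed, so each direction adds
# roughly block_size / sample_rate seconds of delay.
def buffer_latency_ms(block_size, sample_rate):
    return 1_000 * block_size / sample_rate

for block in (64, 256, 1024):
    one_way = buffer_latency_ms(block, 48_000)
    # A purely software monitoring path pays it at least twice:
    # once on input, once on output.
    print(f"{block:>5} frames @ 48 kHz: ~{one_way:.1f} ms each way, "
          f"~{2 * one_way:.1f} ms round trip")
```

Hardware interfaces sidestep this by routing the input to your headphones in analog or on-board DSP before the signal ever reaches the computer, which is what makes the monitoring effectively latency-free.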

2. Higher-Quality Preamps

On its own, a microphone produces a very low-level, essentially inaudible signal that’s referred to as a mic level signal.

The typical mic level signal is anywhere between -60 dB and -40 dB.

So in order for a listener to hear this, the signal needs to be amplified to a nominal level.

And this is precisely what a preamp does.

A preamp, often referred to as a mic pre, simply applies gain to a signal to bring it up to a nominal level for recording or streaming.
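Since gain is just a ratio expressed in decibels, a quick worked example (with hypothetical levels drawn from the mic level range quoted above) shows how much lift a preamp provides:

```python
# Decibel gain needed to lift a mic-level signal to a nominal level.
# The levels are hypothetical examples in the -60 to -40 dB mic range
# quoted above; decibel gains simply add.
def gain_needed_db(mic_level_db, target_db):
    return target_db - mic_level_db

print(gain_needed_db(-50, -10), "dB of gain")  # -50 dB mic to -10 dB -> 40

# In voltage terms, dB = 20 * log10(V_out / V_in), so 40 dB is a 100x
# voltage boost -- any noise the preamp adds is boosted with it, which
# is why clean, low-noise preamps matter.
print(10 ** (40 / 20), "x voltage")            # -> 100.0
```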

And one of the main benefits of an audio interface is that it will be built with higher-quality preamps (like the Mackie Onyx Preamps), which produce a clean signal with low noise.

A preamp can affect your audio signal in several ways:

• How “clean” the amplified signal is

• The amount of noise introduced in the signal once it’s been raised (often referred to as “hiss”)

• The tone, or “character” of your signal. Also referred to as warmth.

• How much gain you can actually apply

3. XLR vs. USB Microphones

Let’s get one thing out of the way first — are XLR microphones inherently better than USB microphones?

Well, the answer is both yes and no.

In fact, in a lot of cases the microphones
