
DAC Essentials: What’s with all this glitch-ing?


When designing with a digital-to-analog converter (DAC), you expect the output to move from one value to the next monotonically, but real circuits don’t always behave that way. It’s not uncommon to see overshoot or undershoot, quantified as glitch impulse, across certain code transitions. These impulses can appear in one of two forms, shown below in Figure 1.

 

Figure 1: DAC glitch behaviors

Figure 1a shows a glitch that produces two regions of code-transition error, which is common in R-2R precision DACs. Figure 1b shows a single-lobe glitch impulse, which is more common in string DAC topologies. Glitch impulse is quantified as a measure of energy, commonly specified in nanovolt-seconds (nV-s).
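Because glitch impulse is simply the area of the error pulse, you can estimate it from a scope capture with a quick numerical integration. Here is a minimal Python sketch (the function name and the triangular 20-mV/100-ns sample pulse are illustrative, not from any datasheet):

```python
def glitch_impulse_nvs(t_s, v_error_v):
    """Trapezoidal integration of the output error -> glitch impulse in nV-s.

    t_s       -- sample timestamps in seconds
    v_error_v -- deviation from the ideal settled output, in volts
    """
    area_vs = 0.0
    for i in range(1, len(t_s)):
        area_vs += 0.5 * (v_error_v[i] + v_error_v[i - 1]) * (t_s[i] - t_s[i - 1])
    return area_vs * 1e9  # volt-seconds -> nV-s

# Illustrative triangular glitch: 20 mV peak, 100 ns wide
# Area = 0.5 * 20 mV * 100 ns = 1 nV-s
t = [0.0, 50e-9, 100e-9]
v = [0.0, 20e-3, 0.0]
print(glitch_impulse_nvs(t, v))  # ~1.0 nV-s
```

In practice you would feed in the sampled error waveform exported from your oscilloscope.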

But before we can talk about sources of DAC glitch, we must first define the term “major-carry transition.” A major-carry transition is a single-code transition that causes the most significant bit (MSB) to change because the lower bits (LSBs) are transitioning. Binary code transitions from 0111 to 1000, or from 1000 to 0111, are examples of major-carry transitions. Think of it as an inversion of the majority of the switches. This is where glitching is most common.
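To make the definition concrete, here is a tiny Python sketch that flags a major-carry transition (the helper name is mine, purely for illustration):

```python
def is_major_carry(code_a, code_b, nbits):
    """True for a single-step code change that flips the MSB (e.g. 0111 -> 1000)."""
    msb = 1 << (nbits - 1)
    single_step = abs(code_a - code_b) == 1
    msb_flips = (code_a & msb) != (code_b & msb)
    return single_step and msb_flips

# The 4-bit examples from the text
print(is_major_carry(0b0111, 0b1000, 4))  # True
print(is_major_carry(0b1000, 0b0111, 4))  # True
print(is_major_carry(0b0101, 0b0110, 4))  # False: MSB unchanged
```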

Two areas of concern are switching synchronization and switch charge transfer, as multiple switches are triggered simultaneously. For the sake of argument, let’s look at an R-2R DAC that’s designed to rely on switches that are synchronized during code transitions, shown below in Figure 2.

 

Figure 2: DAC major carry transition

As we all know, there’s no such thing as perfect synchronization, and any variance in the switching will lead to a brief period where all switches are either high or low, causing an error at the DAC’s output. As the output recovers, switch charge injection creates a lobe in the opposite direction before the output settles.

So let’s take a look at the three stages that take place during a major-carry transition and how the DAC output responds, in Figure 3.

 

Figure 3: DAC output during transition

  1. The initial stage of the DAC, prior to the code transition. In this example we’re looking at the 3 MSBs representing binary code 011.
  2. The DAC output enters a major-carry transition that causes, for a short period, all of the R-2R switches to be connected to ground.
  3. The DAC recovers following a small period of switch charge injection, and the output begins to settle out.

Comparing the output glitch from a major-carry transition with that from a non-major-carry transition, illustrated in Figure 4, shows that switching synchronization is the major contributing factor.

The x-axis scale is 200 ns/div and the y-axis scale is 50 mV/div.

 

Figure 4: R-2R DAC output glitch

So far, we’ve looked at glitch in an R-2R DAC architecture to explain that switch synchronization is the major contributor. Glitching in a string DAC is a little different. By design, a string DAC taps into different points on a resistor string to produce the output voltage. Without multiple switches toggling at once, the pulse amplitude is smaller and often dominated by digital feedthrough. A comparison of the same major-carry code transition for R-2R and string DAC topologies is shown in Figure 5.

 

Figure 5: R-2R vs string DAC output glitch

Understanding why glitching occurs can help you decide whether your design can live with this short impulse. I’ll talk about some methods to help reduce glitch in the coming weeks.

And if you want to learn more about string and R-2R DACs, be sure to check out these previous posts in our DAC Essentials series here on Analog Wire:

Thanks for reading; I promise my next post will be shorter. :-)


How to determine the power at output of modulator from DAC back-off level


Customers often ask how they can determine the power at the output of a modulator from the digital-to-analog converter (DAC) back-off level. This can be confusing because modulator gain is specified in terms of voltage gain, but the answer is very straightforward, and we’ll discuss it in this post.

Figure 1: Transmitter chain line-up

A simple transmitter line-up with DAC cascaded to modulator through filters is shown in Figure 1 above.

Figure 2 below shows a simple modulator.

Calculating the power gain of a 50-Ω two-port device is easy: it is simply the difference of the output and input powers on a log scale. In the case of a modulator, however, the input and output impedances are different and the inputs are differential I and Q signals, so the gain is specified as a voltage gain. The voltage gain of a modulator is the ratio of the RF output voltage to the input I or Q differential voltage, as shown below.

Consider the TRF3705 low-noise, high-performance quadrature modulator. The TRF3705 datasheet specifies -1.9 dB voltage gain at 400 MHz with high gain control (GC). Datasheet measurements were performed with a 1-Vpp differential I or Q voltage. Let’s calculate the output power at this input voltage.

The TRF3705 modulator datasheet shows that at 400 MHz the output power is 2.1 dBm. For DACs, the full-scale power is generally defined as the total power from the I or Q differential signals into a 50-Ω load, as shown in Figure 3 below.

The DAC348x family of high-performance digital-to-analog converters interfaces seamlessly to the TRF3705 modulator. The DAC348x full-scale I or Q differential voltage is 1 Vpp, and with a 50-Ω resistor loading the differential output, the full-scale output power is 4 dBm.

Let’s determine the power at the output of the TRF3705 modulator at 400 MHz. The TRF3705 voltage gain at 400 MHz is -1.9 dB. Let’s assume a DAC348x back-off of 14 dBFS and a filter loss of 1 dB between the DAC and the modulator.

Putting these numbers together, the power at the output of the TRF3705 at 400 MHz will be -12.9 dBm. Thus, the output power of the modulator can be determined by adding the specified voltage gain to the DAC full-scale power and subtracting the DAC back-off and any filter losses.
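The bookkeeping above is easy to capture in a couple of lines. A Python sketch using the numbers from this example (the function name is illustrative):

```python
def modulator_output_dbm(dac_fs_dbm, voltage_gain_db, backoff_dbfs, filter_loss_db):
    """Output power = DAC full-scale power + modulator voltage gain
    - DAC back-off - filter loss (all quantities in dB or dBm)."""
    return dac_fs_dbm + voltage_gain_db - backoff_dbfs - filter_loss_db

# DAC348x full scale = 4 dBm, TRF3705 gain = -1.9 dB,
# back-off = 14 dBFS, filter loss = 1 dB
print(modulator_output_dbm(4.0, -1.9, 14.0, 1.0))  # ~ -12.9 dBm
```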

Analog and the new industrial revolution


Welcome back! If you haven’t had a chance to see my first post, “Engineering the world through analog,” I hope you’ll check it out.

In that post we talked about the technology developments going on even as we speak, with analog semiconductors being key enablers for many of them. Areas like automotive and transportation, communications, security and safety, and health technology are all reaping the benefits. In this post, I’d like to zoom in on some of the key innovations we’re seeing in the industrial market. 

Industrial innovations

Industrial automation is a segment with huge potential for advancements. So what trending areas are driving the current revolution? Manufacturing, for one. Companies are creating efficiencies by automating their manufacturing processes. For instance, a programmable robotic arm with advanced analog and embedded processing ICs can have a significant impact on a company’s success. Automation can help a company’s bottom line by improving productivity, decreasing operating costs and power consumption, and increasing reliability and accuracy. That robotic arm requires front-end sensor devices, operational amplifiers, microcontrollers and power management ICs, to name a few.

At TI, we know first-hand the benefits of gaining efficiency in our factory operations. The greatest sources of power consumption within TI wafer fabs are the engines that cool the water used in manufacturing processes. Compressor motors are the most inefficient part of those engines. So, to improve manufacturing efficiencies by consuming less power and creating less waste, TI now uses variable-drive motors built around analog and embedded processing ICs.

According to the International Energy Agency, if companies converted all motor drives to variable speed drives like we did at TI, companies could reduce global energy consumption by 9-14 percent!

Smart grid

One of today’s biggest game-changers is the smart grid. Technology advances are driving once-in-a-generation advancements in efficiency for electric utility companies. The advanced meters at the heart of the smart grid ecosystem deliver almost real-time power consumption data to utility companies and consumers. These revolutionary changes are based on huge amounts of data collected by integrated circuits in smart meters and delivered wirelessly to central databases. Check out this block diagram for the types of circuits in a typical smart e-meter:

The smart or connected home is advancing by leaps and bounds. This network of machine-to-machine communications is still in its infancy, but some very compelling applications are showing real promise. For example, users can monitor connected alarm systems through their smartphones, receiving notifications when something at home needs attention. Consumers can control their connected thermostats, which adapt to their surroundings, from anywhere in the world using a mobile device.

We’ve been hearing for a while now about the capability to turn on or off your appliances remotely, receive alerts when power consumption increases/decreases, or use cameras to monitor activity on your handheld device. Well, we’re getting closer all the time. Technologies that currently connect users to their alarm systems and thermostats soon may connect them to sensor nodes throughout their homes. This means lights will automatically turn on and off, depending on where in the home you are, CO2 sensors will increase the velocity of your HVAC fan when levels get high, TVs in multiple rooms will sync channels as you move from one room to another, and appliances will have meters in them that tell you how much energy they consume.

While some of these applications are on the market today, many more are on the horizon. At the core of these innovations are analog and embedded processing technologies. These systems include low-power RF chips, sensor integrated circuits (ICs), analog front-ends (AFEs), microcontrollers, power management ICs, interfaces and other semiconductor devices.

What do you think about these technology advancements? How will these applications make a difference in your life at home, work and play? 

Check out my next post where we’ll focus on what’s driving automotive electronics. We’ll ask you which technology innovations you see as having the most impact in the next 10 years. I’m looking forward to hearing your thoughts. Until then, take a look around and you will see that analog is wherever you are!

How to determine if a CFA or a VFA is better for your design


My amplifier buddy, Xavier Ramus, recently wrote a great blog on current feedback amplifiers that I wanted to elaborate on and add a few more pros and cons.

What’s the basic difference between a current feedback amplifier (CFA) and a voltage feedback amplifier (VFA)?

Simply put, in a VFA the negative node Vn tracks the positive node Vp by action of negative feedback. In a CFA, however, the tracking happens by design (as my professor used to say).

CFAs haven’t been around for as long as VFAs and are still less popular compared with their VFA counterparts, but they do offer great benefits when used in the right application.

Some key benefits of CFAs are their high bandwidth, extremely high slew rate and low distortion, making them suitable for large-transient interfaces, including audio applications. Check out the LME49723 to appreciate the low noise and low distortion. On the other hand, CFAs typically lack the precision of VFAs and can have large input bias currents, which can result in a higher current-noise density. They also have mismatched impedances at their inputs (low input impedance at the negative node), as a buffer is used internally between the non-inverting and inverting inputs.

To circumvent the lack of precision in CFAs, combine them with VFAs in a composite-amplifier fashion. Professor Sergio Franco discusses many of these topologies in chapter 8 of his book, “Design with Operational Amplifiers and Analog Integrated Circuits.”

CFAs are an excellent choice for applications requiring fast current-to-voltage conversion (think transimpedance in photoconductive modes), for driving cables in optical systems that require high output current, and even for active filter design. The OPA695 is a 12-V CFA with a noise floor of less than 3 nV/√Hz and 90 mA of output current.

A common mishap associated with CFAs is using them as integrators. Inserting a capacitor in the feedback loop can lead to serious instability. I recommend using a resistor in series with the inverting input leading up to the feedback capacitor. The value of the resistor, however, must be chosen carefully to avoid decreasing the bandwidth unnecessarily.

 

Clamp your amp!


Many analog systems must accommodate a very large range of signal amplitudes with excellent fidelity, or low distortion. At the same time, some signal-chain components are damaged by signals that are too large. One example is an analog-to-digital converter (ADC) input. For a high-performance ADC like the ADC16DV160, the absolute maximum input voltage on one of the Vin pins is 2.35 V. Because the ADC is quite expensive and repairing damaged equipment is even more costly, it is very important to make sure that this voltage level is not exceeded. This means that the preceding stages must be designed to never exceed this voltage.

Here we’ll discuss how to protect the ADC.

For this example we will continue to use the ADC16DV160. The ADC input common-mode voltage is 1.15 V ± 0.05 V. The maximum (linear) input signal is 2.4 Vpp differential, which is 1.2 Vpp on each input. Under normal operating conditions, each pin swings no lower than 1.15 V − 0.05 V − 0.6 V = 0.5 V. Likewise, the upper swing limit is 1.15 V + 0.05 V + 0.6 V = 1.8 V. The absolute maximum voltage of 2.35 V is only 0.55 V away from the normal operating range.

For further perspective, the ADC16DV160 has a spurious-free dynamic range (SFDR) of 98 dB with an input tone at -1 dBFS. To preserve this linearity, the ADC driver needs a P1dB point of 18 dBm or higher. With a 200-Ω input load condition and 6 dB of matching loss, the ADC driver is capable of about 5 Vpp at the ADC input. That means the driver can produce a positive swing of 2.45 V, which is over the ADC absolute maximum voltage and creates the risk of damage to the ADC.
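A quick Python sanity check of the swing arithmetic above, using the values from the text:

```python
vcm, vcm_tol = 1.15, 0.05   # input common mode and tolerance (V)
vswing_se = 0.6             # 1.2 Vpp per pin -> 0.6 V peak swing
vabs_max = 2.35             # absolute-maximum pin voltage (V)

low = vcm - vcm_tol - vswing_se    # lower swing limit
high = vcm + vcm_tol + vswing_se   # upper swing limit
headroom = vabs_max - high         # margin to the absolute maximum

# A driver capable of 5 Vpp differential swings 2.5 Vpp on each pin
driver_peak = (vcm + vcm_tol) + 2.5 / 2

print(low, high, headroom, driver_peak)  # ~0.5, 1.8, 0.55, 2.45 V
```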

One solution to this dilemma is to use an amplifier with an output clamp, such as the LMH6553, which offers 40-mV clamp accuracy and 600-ps recovery time. With its output clamp set to 2.1 V, the LMH6553 still easily passes a full-scale ADC signal, and with 40 mV of accuracy it clamps well before the absolute maximum voltage of 2.35 V.

Below are a few graphs to help illustrate the input signals to the ADC. The first graph shows a normal input signal, where the ADC is operating in the linear region. One key note: a differential signal is composed of two opposite-phase single-ended signals. The graphs below show one of the single-ended signals as the actual voltage presented to the ADC pins. The graphs also include the input and output signals, which are differential and mathematically centered on the graph, even though there is a positive common-mode voltage associated with each one.

Normal input signal

Overdrive input:  Showing how ADC is protected by clamp action

Analog systems that require high speed, low distortion and voltage clamping can benefit from a clamping amplifier like the LMH6553.  By using a clamping amplifier it is possible to protect sensitive and expensive, high performance ADCs. 

Clamp your amp and let me know how it works for you!

DAC Essentials: Glitch-be-gone


In my last DAC Essentials post, I discussed the source of output glitching in precision digital-to-analog converters (DACs). These output pulses can disrupt system behavior when you’re expecting a linear transition as you step up codes. Let’s take a quick look at a glitch pulse from my previous post as a refresher: 

DAC output glitch is an "energy" defined by the width and height of the pulse (shown in green). Manipulating the shape of this glitch may be good enough – depending on the system requirements. Adding a simple RC filter after the DAC output attenuates the amplitude of the glitch but increases settling time. However, the glitch "energy" (area under the curve) remains the same. Below is an example of a DAC crossing a major carry transition showing the output before and after an RC filter.

You should choose the resistor and capacitor for the RC filter by looking at the glitch period and selecting a cutoff frequency a decade or so below the glitch’s characteristic frequency. When picking component values, keep the resistor value small to avoid a large voltage drop with a resistive load. From the resistor selection, the capacitor value then follows from the desired RC ratio.
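As a rough sketch of that procedure (the 1-µs glitch width and 50-Ω resistor below are illustrative assumptions, not values from the post):

```python
import math

def rc_filter_for_glitch(glitch_width_s, r_ohms):
    """Place the RC cutoff about a decade below the glitch's
    characteristic frequency (~1 / glitch width), then solve for C."""
    f_glitch = 1.0 / glitch_width_s
    f_cutoff = f_glitch / 10.0  # one decade lower
    c_farads = 1.0 / (2.0 * math.pi * r_ohms * f_cutoff)
    return f_cutoff, c_farads

# 1-us glitch with a 50-ohm resistor (kept small to limit the
# voltage drop into a resistive load)
fc, c = rc_filter_for_glitch(1e-6, 50.0)
print(fc, c)  # 100 kHz cutoff, ~32 nF
```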

Another approach to reducing glitch is a track-and-hold amplifier technique. This method is a bit more tedious, as it requires strict switch timing along with external components, leading to higher cost and more board space.

Using an external switch, a few passive components and an amplifier, you can remove the DAC output glitch completely and replace it with a small transient from the new sample-and-hold (S/H) switch. This new short transient can then be attenuated with a first-order low-pass filter stage. A basic diagram is shown below.

The system design is fairly straightforward. The switch is open as the DAC crosses a major-carry transition, which is where the glitch takes place. Once the voltage transition completes, the switch closes, charging the CH sampling capacitor to the desired value. The capacitor then holds the new voltage while the switch opens again and the DAC updates its output. This allows you to remove the glitch (in theory) without increasing the settling time.

Here are the pros and cons of the two solutions I've discussed:

  • If the system can live with increased settling time and needs attenuation on the glitch impulse amplitude, a simple RC filter may do the trick.
  • If the system needs the glitch completely removed, a track & hold amplifier solution may work.

Of course, the other option is to steer clear of R-2R DACs and design with a string DAC to avoid large glitches altogether. Just know that doing so may force you to trade off other DAC specs.

If you’re new to the DAC Essentials series and found this post interesting, be sure to check out our previous posts:

Current sensing revisited


A couple of years ago, TI systems engineer Jerry Steele put out a great video called “Current Sensing: Low Side, High Side, and Zero Drift” on the different methods for current sensing. In the video, he discusses low-side current measurement – when the shunt resistor is between the load (or the supply) and ground. The positives of low-side sensing are that the common-mode voltage is essentially 0 V and that it is a very simple, straightforward way to measure current. The biggest drawback is that the load (or supply) is isolated from system ground by the shunt resistor (see Figure 1). This prevents the detection of load shorts to ground, which could lead to system damage. It also means that it is a single-ended measurement – more on this in a second.

Figure 1: Example of low-side current measurement

Jerry’s video also introduces high-side measurement, where the shunt resistor is between the supply and the load (see Figure 2). The pros of this method include easy detection of opens or shorts, since the load is connected directly to system ground, and the ability to monitor the current directly from the source itself. The potential issues include the fact that the common-mode voltage of the amplifier is now essentially the same as the supply voltage, which in some systems is extremely high. This requires input matching in the device to keep the common-mode rejection error low. Also, since this is now a differential amplifier circuit, the circuitry is more complex than the single-ended measurement that low-side sensing can use.

Figure 2: Example of high-side current measurement

Since the video’s release, TI has introduced additional high-performing devices that are ideal for high-side sensing applications. Specifically, the INA282 family supports a common-mode range of -16 V to +80 V with an offset of 70 μV – a more than 95% reduction compared to the INA193 that Jerry references. Now think about the zero-drift discussion as it relates to the accuracy of the measurement and the value of the shunt resistor (full-scale shunt voltage drop) required to meet your accuracy goals. You can see how much more easily any device in the INA282 family enables you to meet those goals.
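To see what the lower offset buys you, compare the offset-induced error against the full-scale shunt voltage drop. A small Python sketch (the 10-mV shunt drop is an illustrative assumption, not a value from the post):

```python
def offset_error_pct(v_offset_v, v_shunt_v):
    """Offset-induced error as a percentage of the sensed shunt voltage."""
    return 100.0 * v_offset_v / v_shunt_v

# INA282-family offset (70 uV, from the text) at a 10-mV shunt drop
print(offset_error_pct(70e-6, 10e-3))  # ~0.7 %
```

The smaller the offset, the smaller the shunt resistor (and its power dissipation) can be for the same accuracy target.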

If you don’t need the high common-mode voltage range, then in addition to the analog-output INA210 family, check out the INA226 – it’s great for either low-side or high-side applications and has an industry-leading offset voltage of 10 μV. Combine that with a gain error of 0.1%, enabled by the input architecture and the digital output, and you can see that it provides industry-leading accuracy regardless of the full-scale differential shunt voltage drop, as shown in Figure 3.

Figure 3: Performance measurement of the INA226 bidirectional current/power monitor

If you want to read more on current-measurement techniques, I highly recommend a four-part series put together by our apps team and featured in EE Times:

Part 1: Fundamentals

Part 2: Devices

Part 3: Accuracy

Part 4: Layout and Troubleshooting Guidelines

Thanks for reading and hopefully this will help you solve your current measuring challenges!

How to determine power gain and voltage gain in RF systems


I’m hearing more and more customers ask, “How does gain through a signal chain change with different load impedances?” and, “When do voltage gain and power gain coincide when measured in dB?” I wanted to share the answers with the Analog Wire audience in case any of you have the same questions. So, here we go…

In a single-ended signal path with 50-Ω termination, the gain calculations are very easy because the voltage gain (20·log(Vout/Vin)) is equal to the power gain (10·log(Pout/Pin)). Things become a bit more complicated, though, when the impedance of the load or source changes. For example, in many radio receiver channels the 50-Ω single-ended signal is converted to a 200-Ω differential signal before it is digitized with a high-performance ADC, such as the ADC16DV160.
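You can verify the equality, and where it breaks, numerically. A short Python sketch (the 2x gain and impedance values are arbitrary illustrations):

```python
import math

def voltage_gain_db(vout, vin):
    return 20.0 * math.log10(vout / vin)

def power_gain_db(vout, rload, vin, rsource):
    """Power gain from terminal voltages and the resistances they drive."""
    pout = vout ** 2 / rload
    pin = vin ** 2 / rsource
    return 10.0 * math.log10(pout / pin)

# Equal source and load impedances: the two gains coincide
print(voltage_gain_db(2.0, 1.0))            # ~6.02 dB
print(power_gain_db(2.0, 50.0, 1.0, 50.0))  # ~6.02 dB

# 50-ohm source into a 200-ohm load: power gain is
# 10*log10(200/50) = ~6 dB below the voltage gain
print(power_gain_db(2.0, 200.0, 1.0, 50.0))  # ~0.0 dB
```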

In addition, there are two main types of amplifiers: voltage-output amplifiers like the LMH6521, and current-output amplifiers like the LMH6515. The calculations below show how these two different kinds of amplifiers react to different load conditions.

Voltage-output amplifiers are the most common amplifiers in non-RF systems and have been immortalized by the classic operational amplifier (op amp). Note that both current-feedback and voltage-feedback op amps have a voltage-output architecture. With recent developments in bipolar transistor technology, op amps and their derivatives are useful up to 2 GHz or higher, and as a result they are finding their way into RF and IF signal paths. Because an ideal op amp has infinite input impedance and zero output impedance, the power gain of an op amp is not normally specified; rather, the gain is given as a voltage gain (Av). A common gain setting is 6 dB, where the output voltage is 2x the input voltage. Note that this gain does not specify an input or output load condition. Because a voltage alone is not sufficient to calculate power, power gain cannot be calculated using only a voltage gain.

Figure 1: Ideal voltage amplifier

Figure 2: Example voltage amplifier, the LMH6521

Current-output amplifiers are another common type of RF amplifier because, for a given input signal, a given output current is produced. There are two common configurations: in one, Iout = Iin × gain; in the other, Iout = Vin × gain. The latter is more common, and in that case the gain is called transconductance (gm). In transconductance-amplifier calculations, both the voltage gain and the power gain depend on load conditions. (Example amplifier: LMH6515, Rin = 200 Ω, Rout = 200 Ω or 400 Ω, maximum gain = 0.1 A/V.)

Figure 3: Ideal current amplifier

Figure 4: Example current amplifier, the LMH6515

For both amplifier topologies, voltage gain in dB and power gain in dB are equal only when the input and output impedances are the same. With current amplifiers, both voltage gain and power gain change with load conditions, while with voltage amplifiers only the power gain changes with load.

As an exercise for the reader, prove that a back-terminated load cuts both voltage gain and power gain by 6 dB – and let me know how it goes!
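Here is one way to check the exercise numerically (the 2-V ideal source and 50-Ω impedances below are arbitrary choices):

```python
import math

# Ideal source vs with back-termination rbt = rload:
# the load sees vs * rload / (rload + rbt) = vs / 2
vs, rload, rbt = 2.0, 50.0, 50.0
vload = vs * rload / (rload + rbt)

dv_db = 20.0 * math.log10(vload / vs)  # voltage change in dB
dp_db = 10.0 * math.log10((vload ** 2 / rload) / (vs ** 2 / rload))  # load power change

print(dv_db, dp_db)  # both ~ -6.02 dB
```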


What is this "unlimited cap load" stuff anyway?


As analog engineers we’ve all had occasion, at one time or another, to build an amplifier or control loop, and we’ve all experienced the frustration of finding out that the circuit is unstable. The late Bob Pease had his share of such instances. He jokingly said once that when he builds an oscillator, it won’t oscillate, but on the other hand, every amplifier he built oscillates at first! Bob’s solution to designing a stable amplifier was to shout at the top of his lungs in his lab… so the circuit could hear him… “I’m building an oscillator!” in hopes that it wouldn’t oscillate. By the way, those familiar with Bob’s writings will notice that I’ve borrowed his terminology for the title of this blog in his honor.

A capacitor is an inherently difficult load for an op amp: through the op amp’s finite output impedance, it creates additional phase shift around the loop, which eats into the phase margin. An amplifier with a heavy capacitive load can especially leave you sleepless at night! Here are some examples of cap loads:

  • MOSFET Gate
  • TFT panel Vcom node
  • Un-terminated or not-well-terminated cables or transmission lines
  • The input of some ADCs
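To see roughly how much a capacitive load hurts, note that the load capacitance and the op amp’s open-loop output resistance form an extra pole at 1/(2πRoCL), which contributes phase lag at the loop crossover. A rough Python sketch (the 50-Ω output resistance, 1-nF load and 10-MHz crossover are illustrative assumptions, not specs of any device in this post):

```python
import math

def capload_pole_hz(r_out_ohms, c_load_f):
    """Extra pole formed by the op amp's open-loop output resistance and CL."""
    return 1.0 / (2.0 * math.pi * r_out_ohms * c_load_f)

def phase_lag_deg(f_hz, f_pole_hz):
    """Phase contributed by that pole at frequency f, in degrees."""
    return math.degrees(math.atan(f_hz / f_pole_hz))

fp = capload_pole_hz(50.0, 1e-9)    # ~3.2 MHz
print(fp, phase_lag_deg(10e6, fp))  # pole below crossover: ~72 degrees of margin gone
```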

TI’s prolific blogger, Bruce Trump, covers this topic extensively in The Signal and provides valuable insight and workarounds in “Taming Oscillations—the capacitive load problem.”

If you are like me and amplifier compensation and load isolation make you queasy, try op amps dubbed “unlimited cap load stable.” They are designed not to oscillate with any cap load directly on their output, at any closed-loop gain! Table 1 below is a quick reference guide for selecting the “unlimited cap load” device that fits your application:

Device    Operating Supply Range (V)    Gain-Bandwidth (MHz)    Short-Circuit Current (mA)    Channels per Package
LM8261    2.5-30                        21                      53                            1
LM8262    2.5-22                        21                      60                            2
LM8272    2.5-24                        15                      130                           2
LM7321    2.5-32                        20                      65                            1
LM7322    2.5-32                        20                      65                            2
LM7121    4.5-33                        235                     52                            1

Table 1: Unlimited cap load op amps selection guide

These devices employ an internal compensation mechanism that takes advantage of the variation of the output transistors’ Miller capacitance with capacitive loading to vary the dominant pole, thereby compensating for the load. This is described in more detail in the article “Unlimited Capacitive Load Drive Op Amp Takes Guess Work out of Design.” Suffice it to say that one such device, the LM8272, has respectable phase margin for up to a 1-nF load and is stable (positive phase margin) across up to five decades of capacitance, as shown in Figure 1 below.

Figure 1: LM8272's impressive phase margin for any CL

Now I’m not advocating that you all start hollering in your lab, like Bob used to do; it’d be chaos after all! But in the off-chance that you do and your circuit still misbehaves when driving a capacitive load, select a device from the list above and show your circuit who’s boss!

Analog drives automotive solutions


Welcome back once again! My last post homed in on some of the latest innovations in the industrial segment. Check it out here. In today’s post I want to take a closer look at the automotive industry, where we are seeing very cool developments taking place.

 Infotainment systems

Shifts in consumer expectations are among the most dynamic changes in today’s automotive market. What do you look for when you are buying a new car? In previous generations, young car buyers cared most about the speed and look of their vehicles. Today, it’s a very different story. These buyers want to stay every bit as connected behind the wheel as they are elsewhere. This shift in mindset is driving automobile manufacturers to create complex infotainment systems that look and behave much like the tablet computers we use every day. Check out this complete system block diagram that shows what’s involved in creating an infotainment system:

Customers are re-defining automotive infotainment using TI solutions, paving the way for an unparalleled in-vehicle experience with entertainment and telematics functionality.

Start-stop capability

Another clever automotive feature is the start-stop capability. A gasoline-powered engine automatically shuts off when it comes to a stop, and then restarts when the driver presses the accelerator. This innovation actually reduces fuel consumption by at least 10 percent. And with the ever-increasing cost of fuel, every penny counts! While it may sound simple, the technology behind it is quite complex. As summer temperatures soar and we get stuck in traffic, we need sensors to tell the air-conditioning compressors to keep blowing cold air on us. Given that the engine belt is stopped, the compressor motors need to be electrically driven. That capability requires FETs, motor-control chips, microcontrollers and communication chips.

Active safety and advanced driver assistance

Imagine a world where cars automatically correct mistakes made by the driver, eliminating avoidable accidents. These are now becoming a reality! Active safety is a very exciting opportunity in the automotive segment. When we drive at highway speeds today, we trust that other drivers just a few meters away are competent behind the wheel. That’s not always the case. Imagine technology-based safety mechanisms that, for example, apply the brakes before an imminent collision. Those types of technologies are available in some cars today, and manufacturers will increasingly include more advanced safety features in the future.

Automotive vision control

Automotive vision control systems process digital information from sources like digital cameras, LIDAR (light detection and ranging), radar and other sensors to perform tasks like lane-departure warning, blind-spot detection or parking assistance. The processed information can be displayed on screens, announced via acoustic warning signals, or conveyed with haptic feedback such as a vibrating steering wheel. These systems include power supplies to regulate the voltages for digital signal processors (DSPs); microcontrollers to handle system control functions and communication with other modules in the car; and communication interfaces to exchange data between independent electronic modules in the car. Check out our video on how rear-view cameras are quickly becoming an integral part of driver-assistance systems.

Car black box

We hear so much about the black boxes used in airplanes and how invaluable the data they collect is. The same concept is now being applied to automobiles. Car black boxes, using digital video recorders, monitor and record activity in and outside the car in panoramic fashion using front, rear and optional side cameras. Videos can be stored on a local disk and viewed on the car’s display monitor, or streamed remotely over a wireless connection. The next time you find yourself saying it wasn’t your fault at the scene or in court, you could have the data to back up your story!

These are just some of the advancements being made available to car manufacturers as they roll out their new designs. And many more are to come. If consumers want it, there’s a good chance it will become a reality!

We’ve covered several different technologies and will touch on health technology in my next post. Now it’s time to hear from you!

Of all these innovations, which of the following do you see as having the greatest impact over the next 10 years?

1.    Cars automatically correct mistakes made by the driver, eliminating avoidable accidents.

2.    Factory production lines continually operate because equipment diagnoses itself through smart sensors prior to a failure, avoiding costly downtime.

3.    Implanted medical devices continually charge themselves, avoiding replacement.

4.    Ability to monitor residential connected alarm systems through portable devices, receiving notifications when something at home needs attention.

5.    A gasoline-powered engine automatically shuts off when stopped, then restarts when the driver presses the accelerator, reducing fuel consumption.

6.     Can you think of other possibilities?

Please post your response in the comments section. Once we receive your input we’ll create a recap to share the community’s feedback on how you want engineering to shape the future.

What’s that? You say you haven’t had a chance yet to check out my previous posts! You can find them here:

Engineering the world through Analog

Analog and the new industrial revolution

How "CAN" you make safe circuits safer? Redundancy!


Any accident reported in the news makes me wonder about safety – not just in planes, trains and automobiles but everywhere around us. What if the train ride I enjoyed had a problem with the sensor network that caused a malfunction? Or the elevator got stuck due to some issue with the network? What if my manufacturing line or production robots failed catastrophically? In such times, I hope that the engineers designing these applications thought thoroughly about safety and redundancy. Economically, redundancy is also great practice. What if a system critical to your business fails? The lost output carries a real financial cost.

Redundancy – having two or three of everything so that if one fails the others work – can add margin to the functional safety of these networks and systems. You might ask, does redundancy mean a higher price because there are two of everything? I understand that we live in a world where everyone wants “everything for nothing” – give me all the features at the lowest price. But there are instances, especially when it affects my life or someone else’s, that I am more than willing to pay for redundancy.

So how does this relate to CAN? CAN has its advantages with cost, error handling, prioritization and arbitration. Therefore, CAN networks are widely used in these and other applications for data communications. CAN serves as a fundamental networking technology.

Our team has taken the redundancy concept and applied it to a CAN transceiver (SN65HVD257) allowing for easy implementation of a redundant CAN physical layer (network). Take a look at Figure 1. Each microprocessor along with the CAN transceivers is one physically redundant node. Traditionally, you would have one CAN controller (microprocessor) and one CAN transceiver per node. But for redundant CAN networks we have two CAN transceivers and two bus connections per node. These transceivers and buses work in parallel, sending the same message on two separate buses. If one of the buses fails, the other is still functional. This system also detects which of the two buses failed and to which CAN state it failed: dominant or recessive, aiding in system debug and repair.
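
The node-side decision logic behind this scheme is conceptually simple: every frame goes out on both buses, and the receiver accepts whichever copy arrives while flagging the bus that went quiet. Here is a toy sketch of that receive-side logic (purely illustrative; the function name and the use of `None` to model a dead bus are my own, not real transceiver driver code):

```python
def receive_redundant(frame_a, frame_b):
    """Pick a valid frame from two redundant buses and report a failed bus.
    Frames are bytes objects; None models a dead or silent bus."""
    if frame_a is not None and frame_a == frame_b:
        return frame_a, None                     # both buses healthy
    if frame_a is not None:
        return frame_a, "bus B failed"           # B dead or corrupted: deliver A's copy
    if frame_b is not None:
        return frame_b, "bus A failed"           # A dead: deliver B's copy
    return None, "both buses failed"

# Bus B goes down mid-operation; the message still gets through.
data, status = receive_redundant(b"\x01\x02", None)
print(data, status)
```

The point of the sketch is only that the node keeps operating through a single-bus failure and also learns which bus to repair, which mirrors the failure-detection behavior described above.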

 Figure 1. Typical redundant physical layer CAN network using the SN65HVD257

As one of our industrial automation customers said “if our logistical system fails, we are dead in the water.” Through the use of redundant networks, we ensure continuity of process so there is no loss of revenue. When a network issue occurs, we don't have a line-down event because there is a second redundant network already in place.  Who wouldn't want some extra dough in their pockets especially in this economy?

So even if the safety reasons don’t motivate you to add redundancy to your systems, think of the financial reasons. And don’t forget to invite me for a drink to celebrate the success of your new redundant, efficient system. Cheers!

For more information:

-          SN65HVD257 product information

-          SN65HVD257 evaluation module

-          User’s guide

Turn up the heat: Packaging is essential to protect your parts


In my last blog post, Turn up the heat, we talked about high temperature flash memory and the need for electronics in very harsh environments. One of the biggest areas of concern for product designers is the functionality of the parts at extreme temperatures and finding the right packaging.  When temperatures reach 200°C plus you’ll want packaging options that can withstand the heat to protect your parts!

There are several reliability concerns with plastic packages at very high temperatures, including:  

  • Kirkendall Voiding - A standard plastic package typically uses gold bond wire to connect to the aluminum bond pads of the die. When exposed to higher temperatures, diffusion of the Al pad into the gold ball bond causes voids in the ball weld that get worse with time. Eventually, the weakened bond can lift. This intermetallic diffusion is called Kirkendall voiding, and the increased electrical resistance and weakened bond interface can lead to failure of the device.
  • Material Decomposition - A standard mold compound can also weaken over time at temperatures higher than specified and decompose, leading to corrosion and bond wire damage.
  • Delamination - Delamination is also a problem when there is increased thermo-mechanical stress leading to lifted bonds. Lead and die-attach delamination can also happen with standard commercial materials at extreme temperatures or over many thermal cycles.

Figure 1. Cross section of Au ball bond with intermetallic voiding

Many standard plastic encapsulated parts are qualified up to 125°C and are used for a wide variety of industrial and automotive applications. But, when exposed to temperatures higher than 125°C these standard packages may not be suitable for reliable operation and other package options need to be evaluated.

Below are some extended temperature packaging solutions to consider for high temperature plastic, ceramic, and bare die packaging:

  • HT Plastic: -55°C to 150°C/175°C. These parts use new material sets specific to high-temperature operation, including leadframe, mold compound, die attach, and bond wire. TI uses specially optimized bond processes to help eliminate the intermetallic issue on bond pads.
  • Ceramic: -55°C to 210°C. These devices use the most rugged, hermetic ceramic packaging to support temperatures above 200°C. The use of aluminum wire eliminates many concerns of gold wire bonding.
  • Known Good Die (KGD): -55°C to 210°C. KGD provide the smallest form factor of fully tested and qualified devices for applications that need to integrate into multi-chip modules or hybrids.

The typical process to build high-temperature electronics requires you to purchase industrial-, automotive-, and military-temperature-range parts, extensively qualify them for temperatures above 125°C, and finally screen every device for use in the system. This is a very costly and time-consuming process. If you’re designing for high temperatures, there are a number of catalog parts tested and guaranteed to perform at temperatures higher than 150°C. Below are a few I think are worth highlighting that may help in your design. Depending on your application and requirements for ruggedizing your end equipment, there are also a number of semiconductor packaging options from TI that can support your specific requirements.

Signal conditioning

Data converters

  • ADS1278-HT: Octal, 24-bit analog-to-digital converter
  • ADS8509-HT: 16-bit 250kHz CMOS analog-to-digital converter with serial interface

Interface

Power

Microcontrollers / Processors

If you need even more options or want info on high temperature devices and packaging come visit us here.

High-speed design: Why phase noise matters!


I often find myself working on problems related to noise. It is the enemy of most analog systems and often difficult to control. It’s everywhere in our environment and systemic to our electronics. Thus the rub: noise is already present at some level, so you need to be clever in how you deal with it. I’ve worked on various communications systems and have seen firsthand how noise can degrade system performance, so here’s a bit on that topic.

Communications equipment designers continuously struggle to improve the bit error rate (BER) of their systems – both wired and wireless. Consult Shannon and you immediately see that the capacity of a channel to carry information error-free depends on the SNR as well as the available bandwidth. If you can’t improve the bandwidth, you must improve the SNR. And if you can’t improve the signal level, the last thing you have control (or lack thereof) over is noise.
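
Shannon’s point is easy to make concrete. The sketch below computes channel capacity, C = B·log2(1 + SNR); the 20 MHz bandwidth and the SNR values are arbitrary illustrations, not tied to any particular system:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Error-free channel capacity in bits/s: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# For a 20 MHz channel, every ~3 dB of SNR buys roughly one more bit/s/Hz.
for snr_db in (10, 20, 30):
    c = shannon_capacity(20e6, snr_db)
    print(f"SNR = {snr_db} dB -> capacity ~ {c / 1e6:.1f} Mbit/s")
```

With bandwidth fixed, capacity grows only logarithmically with SNR, which is exactly why every dB of noise you can remove matters.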

Noise shows up in various ways – some is deterministic, like the conducted radiation from a switching power supply. But most is non-deterministic and broadband, limiting your ability to filter it out. So you need to be careful when selecting components so as not to add additional noise or jitter (a form of noise) into your system. Contributing components can be found all along the receive chain: low-noise amplifiers, mixers, synthesizers, buffers and data converters all contribute to the degradation of the SNR. Let me illustrate…

 

 Figure 1: Typical high-performance wireless analog front end

The system block diagram in Figure 1 shows a typical high performance wireless transceiver front end with mixers to down-convert the band of interest and digitize it, as well as DACs and quadrature modulators to up convert for transmission. Noise contributions at the front end mixer stage can produce products that are present in the first Nyquist zone – and thus are directly in the band of interest.

To combat this, selecting a high-performance frequency synthesizer such as the new LMX2581 can both improve SNR and diminish spurs, improving SFDR. This particular synthesizer provides an extremely wide tuning range (50 MHz to over 3.7 GHz) through its flexible architecture, and it also has a very high-performance phase detector which can help designers avoid integer-boundary spurs as well as improve overall phase-noise performance.

The remaining components in the receive chain are the VGA and the ADC. If we take a quick look at the ADC, the noise contribution is a function of quantization noise as well as the input stage, sample clock, and aperture jitter. The sample clock is especially problematic since it may contain excessive jitter as well as spurs, which can degrade ENOB and SFDR performance. Selecting a good jitter cleaner, such as those from the LMK04800 family, is critical to overall good noise performance.
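
How much jitter hurts can be estimated with the widely used jitter-limit formula SNR = −20·log10(2π·fin·tj), which gives the best-case SNR of an ADC whose only impairment is sample-clock jitter. A quick sketch (the 70 MHz input frequency and the jitter values are illustrative assumptions, not from any datasheet):

```python
import math

def jitter_limited_snr_db(f_in_hz, jitter_s):
    """Best-case SNR set purely by sample-clock jitter:
    SNR = -20 * log10(2 * pi * f_in * t_jitter)."""
    return -20 * math.log10(2 * math.pi * f_in_hz * jitter_s)

# Digitizing a 70 MHz IF: the SNR ceiling tightens fast as jitter grows.
for tj in (1e-12, 300e-15, 100e-15):
    snr = jitter_limited_snr_db(70e6, tj)
    print(f"{tj * 1e15:.0f} fs RMS jitter -> SNR limit {snr:.1f} dB")
```

Note that the limit scales with input frequency, so a clock that is clean enough at baseband can still dominate the noise budget when undersampling a high IF.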

I hope this provides some insight to the trouble with noise in your next communications design! …till next time.

Wanted: Stable oscillator


When dealing with high speed amplifiers, it’s common to have unwanted oscillations because of parasitic or loop gain issues.  It’s possible to predict the frequency range of these oscillations, but not to target a specific frequency.  So how do we create an oscillator with a specific frequency?

There are various approaches.  Many oscillator circuits are based on transistors, but some are adapted to use operational amplifiers.  Here we will make use of an Operational Transconductance Amplifier (OTA) to create a linear oscillator.  The transconductance is the conversion of voltage into current expressed in mA/V or S (Siemens). More information on OTA can be found in the OPA861 product datasheet or in an application note I wrote titled “Demystifying the Operational Transconductance Amplifier.” 

One simple way to consider an OTA is to look at it as a self-biased bidirectional transistor with three terminals: a B-input, an E-input/output and a C-output.  The nomenclature used here emphasizes the resemblance to a transistor.  The B input has the same function as the base of a bipolar transistor, E is the emitter and C is the collector.  The E-input/output is used as either an input or an output depending on the circuit configuration.

Consequently, the B-input is high impedance, the E-input is low impedance with an output impedance of 1/gm (where gm is the transconductance gain in mA/V), and the C-output is high impedance.

Figure 1: Parallel LC oscillator

The oscillating frequency is set by fOSC = 1/(2π√(LOSC × COSC)).

RC will set the Q factor, or how wide the spread is around the resonant frequency, while RE will set the gain.  Note that the gain resistor is the sum of the internal impedance of the E-input with the external gain resistance.

In the same manner, don’t forget to take into consideration the parasitic capacitance at the C-output, the B-input and the buffer input nodes when calculating the resonance frequency and to select Cosc as to be the dominant term.

The circuit developed here was implemented using the OPA860 which combines both a high speed OTA and a closed-loop buffer.  To achieve the results shown in figure 2 below, we selected the following components:

RC = 100Ω (5%)

RE = 24Ω (5%)

Losc = 12nH (5%)

Cosc = 1nF (X7R ceramic, ±15%)

Due to component tolerances, we expect the oscillation to be between ~41.8MHz and ~51.6MHz.  We measured 43.1MHz at room temperature, in accordance with component tolerances.
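
That expected spread follows directly from fOSC = 1/(2π√(L·C)) and the worst-case component tolerances. A quick check, using the tolerance figures from the parts list above (my own arithmetic, ignoring the parasitic capacitances mentioned earlier):

```python
import math

def lc_resonance_hz(l_h, c_f):
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1 / (2 * math.pi * math.sqrt(l_h * c_f))

L_NOM, L_TOL = 12e-9, 0.05   # 12 nH inductor, 5% tolerance
C_NOM, C_TOL = 1e-9, 0.15    # 1 nF X7R ceramic, +/-15%

f_nom = lc_resonance_hz(L_NOM, C_NOM)
f_min = lc_resonance_hz(L_NOM * (1 + L_TOL), C_NOM * (1 + C_TOL))  # both high
f_max = lc_resonance_hz(L_NOM * (1 - L_TOL), C_NOM * (1 - C_TOL))  # both low
print(f"nominal {f_nom / 1e6:.1f} MHz, range {f_min / 1e6:.1f} - {f_max / 1e6:.1f} MHz")
```

The lower bound reproduces the ~41.8 MHz quoted above; the exact upper bound depends slightly on how the tolerances are stacked and on the parasitics that this sketch leaves out.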

Figure 2: Resonance frequency variation over temperature

For the plot above, the entire PCB was inserted in the oven, drifting all components together.  The overall center-frequency variation comes from the independent LC elements.  The OTA transconductance gain varies with temperature as well; if the loop gain becomes insufficient, the oscillation will stop.

Improvements would have to be made to minimize the resonant frequency’s temperature dependency, possibly using calibration. Used at room temperature, this circuit can measure the capacitance or inductance variation of a system by monitoring the oscillation.  As the capacitance or inductance of the system varies, the resonant frequency changes, providing a relative measurement of the varying element.

Check out my blog post titled “High-gain, high-bandwidth: why is this circuit oscillating” if you’re looking for more info on what to do if you have an oscillation in your design.

And, if you’d like even more reading material, I encourage you to check out my other posts as well.

High-gain, high-bandwidth… how can I get it all?

High-gain, high-bandwidth....putting it all together

Correcting DC errors in high-speed amplifier circuits

Reducing amplifier power consumption for SAR ADC drivers

Current feedback amplifier...how do I make it work for me?

This amplifier doesn't exist...now what!?

This amplifier doesn't exist...now what?! - Part 2

The need for speed: Turbo-charged CAN


WTHAYTA? What the heck are you talking about? Do you feel that you are always surrounded by people using acronyms and you do not know what they mean? With multiple definitions for the same acronym, you can’t just use HD for half duplex or FD for full duplex. HD nowadays means High Definition, and just recently FD got rechristened to mean Flexible Data-rate. Have you heard of that? If not, stick with me.

Controller Area Network (CAN), which has been a work horse communication standard for many years, generated new buzz over the last year with a new CAN with flexible data-rate (FD) standard. This new CAN FD proposal was created to help free up network bandwidth and more fully utilize the network. It solves two limitations in CAN networks today.

The first shortcoming that limits CAN communication today is the amount of overhead that comes with every message, or frame, as the CAN standard calls it. An easy way to look at this is to compare the number of data bits that can be sent in one frame with the total number of bits that must be sent as overhead. For example, CAN limits the data field to eight bytes of data, which is equivalent to 64 bits. A CAN frame with an 11-bit identifier field will have a total of 111 bits, not including stuff bits, which we will ignore here for simplicity. That means that 47 bits, or 42.3 percent of the message, is overhead!

Moving to CAN FD, the data field is extended to allow up to 64 bytes of data, or 512 bits, in one CAN message. Taking the previous example where an 11-bit identifier was used, there are now 512 bits of data in a 568-bit frame. This means that the overhead is only 56 bits, or 9.9 percent of the entire message! This in itself can make a huge difference in freeing up network bandwidth. Imagine being able to say eight times as much information in one breath!
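
The overhead percentages above are easy to reproduce. In this sketch the fixed overhead is taken as 47 bits for the classic 11-bit-identifier frame and 56 bits for the CAN FD frame, per the bit counts quoted above (stuff bits ignored, as in the text):

```python
def can_overhead_pct(data_bytes, overhead_bits=47):
    """Overhead as a percentage of the whole frame, ignoring stuff bits."""
    data_bits = data_bytes * 8
    frame_bits = data_bits + overhead_bits
    return 100 * overhead_bits / frame_bits

print(f"classic CAN, 8-byte payload: {can_overhead_pct(8):.1f}% overhead")
# CAN FD frame from the text: 568 bits total, 512 data bits -> 56 overhead bits
print(f"CAN FD, 64-byte payload:     {can_overhead_pct(64, 56):.1f}% overhead")
```

Eight times the payload for only nine extra framing bits is where most of the bandwidth gain comes from, even before the data rate is increased.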


The second thing that limits CAN communication is the speed, or data rate, at which the information is sent over the bus. This is where the flexible data-rate part of the title comes to play. One of the benefits of CAN is that when multiple nodes try to access the bus at the same time a non-destructive bit-wise arbitration takes place at the beginning of the frame. To ensure that nodes on opposite ends of the bus can properly arbitrate for bus access, the speed of communication is limited by the two-way loop time of the bus. With a propagation delay of roughly five nanoseconds per meter on a twisted pair bus of 24 AWG wire, this delay starts to really reduce the maximum communication speed with longer buses.

CAN FD proposes having two data rates within the CAN frame. During the beginning of the frame, when arbitration is occurring and multiple nodes can access the bus, a slower data rate is used. Once arbitration concludes and only one transceiver is driving the bus, the two-way loop time no longer needs to be met and communication can switch to a faster data rate. By doubling or even quadrupling the data rate at which the data field is sent, the time it takes to send the total frame decreases by more than 65 percent. Remember back in the day when you had a dial-up modem connection and your webpages used to load line-by-line? Then one day you switched to broadband and your website loaded almost instantly. Remember how exuberant you were? That is the CAN FD impact.

Figure 1 shows the benefit of increasing the data rate and/or the size of the payload. Check out TI’s flexible data-rate-ready SN65HVD255 and SN65HVD256 turbo CAN transceivers. Now you know WTHIATA!


SINAD, ENOB and the rest of the family


My wife, who’s not an engineer, recently heard me talk about SINAD and ENOB. After some time she asked, “Who are SINAD and ENOB?” She was confused because I use these terms so loosely, a common misstep among us engineers. We use them so loosely, in fact, that sometimes they’re used in the wrong context. Let me explain what I mean:

When I hear ENOB, or effective number of bits, I need to know whether it pertains to the analog-to-digital converter (ADC) alone, which is typically given in the product’s datasheet, or to the system ENOB – that of the ADC, the amplifier, the passive components, the voltage reference (if any), the power supply and anything else that generates noise.

In the case of delta-sigma converters, ENOB is usually expressed as a function of output data rate and gain. Sometimes the order of the digital filter is also indicated and the results for ENOB are tabulated. So when the question is asked “what is the ENOB of your ADC?” it really should be followed by “at x frequency and at a specific gain.” Some recent delta-sigma converters integrate EMI filters with the programmable gain amplifier (PGA) to reject inadvertent noise injection. These are low-pass filters whose thermal noise is also accounted for. An example of such a device is the ADS1220. For SAR ADCs, ENOB is typically expressed as a function of input frequency and the voltage reference, just as spectral noise is measured versus frequency for op amps. ENOB is written as (SINAD – 1.76)/6.02.

SINAD is the ratio of the signal to noise plus any harmonics and is used to evaluate the dynamic performance of the ADC. SINAD is usually plotted over input frequency spectrum at a particular voltage from the reference and the supply.

Effective resolution, on the other hand, is from a DC perspective. It is written as ln(FSR/RMS noise)/ln(2), where FSR is the full-scale range. To get the noise-free resolution, use the peak-to-peak noise value instead of the root mean square (RMS) value.

If you want a thorough computation of the noise, you should combine the individual noise sources in root-sum-square (RSS) fashion.

This is especially helpful when using high resolution converters like the ADS8881 with an amplifier to drive them. The result will give you a good indication whether the op amp you selected is good enough, noise wise, to maintain an effective resolution. After all, the last thing you want to do is to throw away bits you’ve already paid for!
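
The relationships above – ENOB from SINAD, effective resolution from DC noise, and RSS combination of independent noise sources – fit in a few lines. The numeric inputs below are illustrative, not taken from any specific device:

```python
import math

def enob(sinad_db):
    """Effective number of bits from SINAD: (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

def effective_resolution(fsr_v, rms_noise_v):
    """DC effective resolution in bits: log2(FSR / RMS noise)."""
    return math.log2(fsr_v / rms_noise_v)

def rss(*noise_terms_v):
    """Combine independent noise sources root-sum-square."""
    return math.sqrt(sum(n * n for n in noise_terms_v))

print(f"SINAD of 92 dB -> ENOB {enob(92):.1f} bits")
# 5 V full-scale range; ADC and driver-amp noise combined RSS
total_noise = rss(10e-6, 6e-6)
print(f"effective resolution: {effective_resolution(5.0, total_noise):.1f} bits")
```

Running numbers like these for a candidate driver amplifier quickly shows whether its noise contribution will cost you bits you already paid for.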

 

A closer look at special function amplifiers


Have you ever wondered why there are so many variations when it comes to special function amplifiers? Here, we’ll cover some of those variations and how to best use them in your designs.

Special function amplifiers can be programmable, but they don’t have to be. Instrumentation amplifiers (in amps or INA), programmable gain amplifiers (PGA), variable gain amplifiers (VGA), and digital variable gain amplifiers (DVGA) all fall in this category.

Traditionally, monolithic in-amps are designed using two- and three-op-amp topologies, offering high common-mode rejection (thanks to well-matched on-chip resistors) and high input impedance, usually at a lower cost than their discrete counterparts.

The two-op-amp option offers the advantage of using one fewer amplifier, at the expense of poorer common-mode rejection due to the additional phase shift, or propagation delay, of the first op amp. More conventional designs use a three-op-amp topology, which provides better balancing and matching, as well as higher common-mode rejection.
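
For reference, the classic three-op-amp topology sets its gain with a single external resistor working against a matched internal pair: G = 1 + 2·Rf/Rg. A minimal sketch (the 25 kΩ internal value and 505 Ω gain resistor are illustrative, not from any particular part):

```python
def ina_gain_3opamp(r_feedback_ohm, r_gain_ohm):
    """Classic three-op-amp instrumentation-amp gain: G = 1 + 2 * Rf / Rg."""
    return 1 + 2 * r_feedback_ohm / r_gain_ohm

# 25 kohm internal feedback resistors, 505 ohm external gain resistor -> G ~ 100
print(f"G = {ina_gain_3opamp(25e3, 505):.1f}")
```

Because the matched resistor pair is on-chip, the gain accuracy and common-mode rejection come down largely to that single external resistor, which is exactly the convenience monolithic in-amps sell.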

Regardless of their topology, in-amps have one thing in common: differential inputs and a single-ended output, with a reference pin that is connected to ground for dual supplies or biased to mid-supply for single-supply operation.

Other topologies are also used depending on the target application. Switched capacitors are used with chopper stabilized in amps for applications with low impedance sensors such as magnetic coils and strain gauges.  

Just like op amps, in amps can be designed with current mode or current feedback. The common mode is independent of resistor matching and the in amp can sense ground. The LMP8358 uses this technique, along with auto zeroing, to provide superior performance and on chip fault detection.

A major benefit of current mode topologies is the ground sensing capability. Rail to rail is also implemented by charge pumps at the expense of additional noise and an increase in die size.

PGAs differ from in-amps in that they have a differential output. They don’t usually require an external gain-setting resistor, and in some cases they are used at gains lower than one. The attenuation feature is very useful in applications like structural testing and vibration control, or universal I/O modules for automation. These make great building blocks for integrated solutions, and they’re often part of a high-resolution analog-to-digital converter (ADC), as in the case of the ADS1220.

The latest PGA from the zero-drift family is the PGA281. It has an input offset voltage of 5µV and a gain drift of 0.5ppm/°C, with EMI rejection of up to 120dB and a CMRR of 140dB! As for VGAs and DVGAs, they’re generally grouped into two types: constant output and constant input. In VGAs with constant output, the gain is adjusted to keep the output signal at a constant level while the input signal varies. In VGAs with constant input, the gain is adjusted to change the output signal while the input signal level is constant.

VGAs tend to be useful in applications requiring higher speed, like sonar and photomultiplier control in the industrial space, automatic gain control (AGC) in receivers for base stations, collision avoidance in automotive, video broadcasting and ultrasound. And just like in-amps, there are several topologies that can be implemented depending on the target applications. The LMH6521 is just one of many DVGAs from TI that offer superior gain and phase matching.

The last type of special function amplifier is a complete analog front end (AFE) which combines gain and attenuation and provides users with the flexibility of single or differential inputs and outputs. The LMP7312 is an example of such AFE.

If you’re interested in learning more about special function amplifiers, specifically the basic difference between a current feedback amplifier (CFA) and a voltage feedback amplifier (VFA) check out my blog post called how to determine if a CFA or a VFA is better for your design.

What you should know about Thunderbolt™


Jumping onto the PC connectivity scene recently, Thunderbolt™ technology has emerged as the fastest way to transfer data between a PC and peripheral and display devices.  Developed by Intel with collaboration from Apple, Thunderbolt technology has emerged as an I/O standard with flexibility, performance and simplicity.

At 10 Gbps full-duplex bandwidth per channel, Thunderbolt technology merges high-speed data (PCI Express) and display (DisplayPort) onto a single protocol. Each Thunderbolt cable has two separate 10Gbps channels, allowing easy daisy chaining of multiple peripheral devices, up to seven deep. Dual channels allow more than a single device in the daisy chain to achieve the high bandwidth enabled by the Thunderbolt protocol. With the recent announcement of Thunderbolt 2, the dual 10 Gbps channels can be combined into a single data channel to get 20 Gbps full-duplex bandwidth.  To achieve such high-speed data transfers, the host (PC), cable and peripheral device must each maintain high levels of signal quality.  To this end, even the cables contain active circuitry supporting clean data transfer.

At 10 Gbps per channel, Thunderbolt technology enables[1]

  • Transferring a full-length HD movie in less than 30 seconds
  • Streaming an HD movie directly from external storage
  • Backing up one year of continuous MP3 playback in just over 10 minutes
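
Claims like these fall out of simple bandwidth arithmetic. In the sketch below the ~20 GB movie size and the efficiency factor are my own assumptions; real-world throughput sits below the raw line rate:

```python
def transfer_seconds(size_bytes, link_gbps, efficiency=1.0):
    """Time to move a payload over a link, at a given fraction of line rate."""
    return size_bytes * 8 / (link_gbps * 1e9 * efficiency)

hd_movie = 20e9  # assume a ~20 GB full-length HD movie
print(f"10 Gbps channel:          {transfer_seconds(hd_movie, 10):.0f} s")
print(f"20 Gbps (Thunderbolt 2):  {transfer_seconds(hd_movie, 20):.0f} s")
```

Dropping `efficiency` to something like 0.7 to model protocol overhead still keeps the transfer well under a minute, consistent with the "less than 30 seconds" figure above.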

Not only does Thunderbolt provide high-speed data transfer, but it also supplies 10W of power to a peripheral device.  This power capability enables many new peripheral devices to be powered directly from the cable without the need for external power sources…true single-cable solutions for a wider range of applications.

To enable Thunderbolt, there are many considerations to keep in mind:

  • High-speed muxing of PCIe and DisplayPort data
  • Thunderbolt system control via microcontrollers
  • Data retiming in the cable for maintaining signal integrity
  • Power sequencing to the Thunderbolt controllers and support circuitry
  • Power delivery from host to the cable and peripheral device
  • Auto power delivery and high-voltage blocking to the sensitive cable circuitry
  • Receiving and managing power in the peripheral device
  • Safety certifications

Some of these may not pertain to the specific system you are designing, but many will.  The Thunderbolt specification details the proper hand shaking, power sequencing, and data transfers necessary to establish the Thunderbolt link and handle the necessary data transfers and power delivery.

You can easily navigate these system considerations by relying on TI’s broad product portfolio of Thunderbolt specific devices, each a perfect fit for Thunderbolt 1 and Thunderbolt 2 systems.  Check out TI’s complete Thunderbolt ecosystem solution and visit the TI E2E Community to get expert input from TI’s many Thunderbolt experts.

Visit us at TI’s exhibit in the Thunderbolt™ Community at IDF13 in the Moscone West Convention Center, San Francisco, California taking place September 10-12.



[1] thunderbolttechnology.net

DAC Essentials: Understanding your DAC's speed limit


In the last two “DAC Essentials” posts, Tony Calabria introduced a digital-to-analog converter (DAC) dynamic specification called glitch and discussed common techniques to “de-glitch” your DAC.

Today, we’ll look at two related dynamic specifications – slew rate and settling time. To learn more about how static and dynamic specifications differ, refer to this post.

What is slew rate?

Retired TIer, and analog guru, Bruce Trump may have summed up slew rate best in one of his final blog posts on The Signal, when he described it as an op amp’s speed limit. DAC slew rate specs match 1:1 with op amp slew rate specs.

Basically, when a sufficiently large change in the input voltage occurs, like when a new DAC code is latched that is several codes away from the current code, the output amplifier will begin to slew, or increase the output voltage as quickly as it can. It does this until it gets close to the intended value, and the output begins to settle within a specified error-band.

 The datasheet specification tells you the maximum rate of change you can expect to see at the output of the DAC when it is slewing, typically in volts per microsecond.

Note: This figure isn’t drawn to scale for any real device; it is exaggerated to show each region.

 What is settling time?

DAC settling time also bears some striking similarities to op amp settling time. The chief difference, though, is that DAC settling time also includes a figure called dead time. Dead time is the time the DAC spends latching, or updating, its output. This latching action is typically triggered by the falling edge of a digital signal, called LDAC. The LDAC and DAC output interaction is illustrated in the figure below, taken from the DAC8568 datasheet.

If a large input step occurs, the DAC will enter the slew region, which appears in both of the figures above. In the slew region, the DAC’s progress is limited by the slew-rate specification. If the DAC does have to slew, the next phase of settling time will be an overload-recovery condition, followed by linear settling into a specified error band. This error band is typically specified within 1 LSB for the DAC.

The datasheet specification for settling time will be given for a relatively large step size. The DAC8568, for example, specifies settling time as 5 µs typical for a change from ¼ full-scale output to ¾ full-scale output.

Keep in mind that slew time can dominate your overall settling time figure, so if your output step-size is smaller than the step-size for the settling time spec in the datasheet, it will take less time for your system to settle. In most high-accuracy applications, settling time is the effective update rate for the DAC.
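
A back-of-the-envelope model of total settling time is simply dead time + slew time + linear settling, where slew time is the step size divided by the slew rate. The numbers below are illustrative placeholders, not DAC8568 specifications:

```python
def settle_time_s(step_v, slew_v_per_us, dead_time_us, linear_settle_us):
    """Rough total settling time: dead time + slew time + linear settling."""
    slew_us = step_v / slew_v_per_us          # time spent slew-rate limited
    return (dead_time_us + slew_us + linear_settle_us) * 1e-6

# A 2.5 V step through a 1 V/us output stage: slewing dominates the total.
t = settle_time_s(step_v=2.5, slew_v_per_us=1.0, dead_time_us=0.5, linear_settle_us=2.0)
print(f"estimated settling: {t * 1e6:.1f} us")
```

Shrink the step and the slew term shrinks with it, which is why small code-to-code updates settle faster than the large-step figure quoted in the datasheet.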

My next post will be about total unadjusted error, or TUE, a handy way of succinctly describing DAC accuracy. That post will conclude the “DAC Essentials” series on Analog Wire. But Tony and I are not going away. You’ll find us contributing to the TI Precision Designs Hub (“The Hub”), a new blog from Texas Instruments providing precision analog tips, tricks and techniques – from how to read datasheet specs and test conditions to how to optimize the external reference for analog-to-digital converter (ADC) performance.

An added bonus this week: be sure to check out a new “Engineer It” video about multiplying DACs (MDACs) from my colleague Rahul Prakash.

As always, leave your comments below if you’d like to hear more about anything mentioned in this post, or if there is something you would like to see included in a future post here or on The Hub.

High-current pulsed-source needed


I’ve spent a lot of time in labs over the past fifteen years or so evaluating devices and coming up with new measurement methodologies and approaches.  One instrument I haven’t come across, though, is a source that lets me generate a current pulse.

In some of my previous posts I’ve written about current feedback amplifiers and their uses, and using transconductance amplifiers to develop an oscillator.  And, my fellow amplifier cohort, Soufiane Bendaoud, also recently elaborated on my current feedback amplifier post.

This post is not too different: I am going to stay with current-mode amplifiers, using another transconductance amplifier to develop a high-output-current pulse source.

For this experiment I’ll use the little-known OPA615 amplifier.  If you check out the datasheet, you’ll see it was initially developed to provide a DC-restore function for analog video, integrating that function years ago into a more power-efficient package with a smaller footprint.  The OPA615 is interesting in that it combines two transconductance amplifiers and one integrated switch.  Together, these three elements make for a very flexible device, capable of acting as a nanosecond pulse integrator as well as a sample-and-hold.  The switch is also fast, with a control delay time of 2.5 ns. See the OPA615 block diagram in Figure 1.

Figure 1: OPA615 block diagram

As you can see in Figure 1, the first transconductance amplifier is a comparator, essentially a differential-pair input followed by a switch.  Note that this comparator output is a current source.  The comparator and the switch form a sampling comparator (SC), which is the block of interest here; the operational transconductance amplifier (OTA) block is not used in this design.

The important specs here are the 350 MHz bandwidth and ±20 mA output current capability of the SC block, and the 2.5 ns control propagation delay time of the switch.  To increase the output current, we cascade two current mirrors to provide the desired current amplification: one mirror uses NPN transistors and the other uses PNP transistors, as shown in Figure 2.

Figure 2: Pulsed current source block diagram

Although the output of the SC is bipolar, we developed a unipolar output to quickly evaluate the feasibility and performance of this source.

Transistor arrays are used for the current mirror implementation.  We initially wanted to achieve more than 200 mA.  Due to thermal limitations of the package, three quad transistor arrays are used for each current mirror, for a total of twelve transistors.  With one device on the input side and eleven on the output side, this gives a current mirror ratio of 1:11, which maintains the same current density in each transistor and avoids local overheating.  The NPN implementation is shown in Figure 3.
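The mirror sizing above can be checked in a few lines. This sketch assumes the SC block drives its full ±20 mA capability into the mirror input and that the three quad arrays give twelve identical devices:

```python
# Sizing check for the cascaded current mirrors (assumed values).
SC_OUT_MA = 20.0                   # SC block output current capability, mA
N_TRANSISTORS = 12                 # three quad transistor arrays
MIRROR_RATIO = N_TRANSISTORS - 1   # 1 input device : 11 output devices

# Peak output current after the 1:11 mirror:
i_out_ma = SC_OUT_MA * MIRROR_RATIO
print(f"mirror ratio 1:{MIRROR_RATIO}, peak output {i_out_ma:.0f} mA")

# Each output transistor carries the same current as the input device,
# so current density (and therefore heating) is matched across the array:
per_device_ma = i_out_ma / MIRROR_RATIO
print(f"per-device current: {per_device_ma:.0f} mA")
```

With these numbers the peak output works out to 220 mA, comfortably above the 200 mA target, while no single transistor in the array carries more than the 20 mA the SC block can supply.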

Figure 3: NPN current mirror implementation

Figure 4 below demonstrates the pulse response for a 1.8 A, 500 ns current pulse and a 200 mA, 5 ms current pulse.

Figure 4: 1.8A 500ns current pulse (Top), 200mA 5ms current pulse (Bottom)

To avoid voltage-compliance limitations at the current source under load, the transistors were selected to handle 60 V, and the mirror supply is adjusted independently of the ±5 V supply.  In the plots above, +20 V is used for the current source and ±5 V for the supply of the OPA615.
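A quick headroom and dissipation check shows why a short pulse width keeps the output transistors safe even at 1.8 A. The load drop used below is a hypothetical value for illustration, not a measurement from the article:

```python
# Back-of-envelope dissipation check for the mirror output devices
# during a current pulse. v_load is a hypothetical load voltage drop.
V_SUPPLY = 20.0    # current-source supply used in the plots, volts
I_PULSE_A = 1.8    # peak pulse current from Figure 4, amps
T_PULSE_S = 500e-9 # pulse width from Figure 4, seconds

def pulse_energy_j(v_load):
    """Energy dissipated in the mirror for one pulse (joules)."""
    v_drop = V_SUPPLY - v_load          # voltage left across the transistors
    return v_drop * I_PULSE_A * T_PULSE_S

# With a hypothetical 10 V dropped across the load, each 500 ns pulse
# leaves 10 V across the transistors:
e = pulse_energy_j(10.0)
print(f"{e * 1e6:.1f} uJ dissipated per pulse")
```

Even though the instantaneous power is in the tens of watts, the energy per 500 ns pulse is only microjoules, which is why the duty cycle, rather than the peak current, tends to set the thermal limit for a pulsed source like this.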
