
EMC Project Management and COTS

By Tim Williams

Introduction

One feature of the electronics industry that has become apparent, over the years of advising companies on EMC aspects of their designs, is the difficulty that many project managers seem to have in anticipating the problems that EMC requirements will cause to their project. Although it’s a universal feature – and you could say that an EMC consultant is going to bump into it fairly often, given his vocation – it seems to be especially prevalent in industries where large projects with detailed customer specifications are the norm. In such cases the requirements flow down from prime contractor to sub-contractor and further, and in so many cases, the understanding of what the requirements mean doesn’t flow down in parallel.

Many industry sectors suffer from this myopia: railway, automotive, telecom and aerospace are all affected. But the one which seems to suffer the most is the military sector. It has perhaps become more obvious in recent years, because of the squeeze on military spending, the expectation that the military should have access to the latest technology in the shortest timescale, and consequently the need to adapt commercial products for military use at the lowest cost – commercial-off-the-shelf (COTS).

In the EMC context, it works like this: the designer sees a functional requirement at the definition stage, and sees that it can be met by a commercially-available product. Along with the functional requirement, there is a whole stack of environmental requirements: such as shock and vibration, temperature and consequent heat dissipation, ingress protection, and of course EMC. The mechanical stuff is understood and can mostly be coped with, but EMC is a foreign country. They do things differently over there. As a result, the EMC requirements get pushed to one side in the early stages of project definition, with the expectation that they are simply an “engineering problem” that can be solved at a later date, if at all. There is a pious hope that when it comes to the scheduled EMC compliance test, the product will sail through without difficulty – or at least, any difficulties that arise can be offloaded onto someone else’s part of the system, or negotiated for a waiver, or at the most, dealt with by throwing in a few ferrites or filters, as one might apply a sticking plaster to a scratch.

The EMC consultant tends to get a phone call with an undercurrent of desperation when the EMC test has gone wrong and no such sticking plaster is obviously on the table. And this is, of course, the worst time to be involved in the project. Usually, the sales negotiators have bargained away any room for manoeuvre with the client. The most obvious engineering approach might be to find a sensible compromise between, for instance, discovered emissions that are “over the limit” and any expected susceptible frequencies in the eventual installation. (This is one implication of the “EMC Assessment” provision of the EMC Directive – one that’s never used, since product developers will go the extra mile, sometimes taking a company to the edge, to get that magic word “Pass” on the test certificate.)

Some standards – the US MIL-STD-461 is one of them – allow limits and levels to be tailored. Some users might want to “tailor” by reducing the severity of the requirement to meet the test result, which in some circumstances might actually be acceptable. But this is where the lack of understanding along the whole contractual chain is most pernicious. If a sub-contractor has signed up to meeting “the spec”, he can’t then renege on that, whatever the engineering practicalities, without substantial commercial penalties. And no project manager is going to put their career on the line for that.

COTS

A lot of the issues arise because commercial products – IT equipment, power supplies, instrumentation and so on – are pressed into service against EMC requirements they were never designed to meet. Or, as MIL-STD-461F puts it,

The use of commercial items presents a dilemma between the need for EMI control with appropriate design measures implemented and the desire to take advantage of existing designs which may exhibit undesirable EMI characteristics.

There has been considerable effort over the last few years to try and devise a way forward in such a situation. This has resulted in the concept of a “gap analysis”, in which the commercial specifications which a product is said to meet – usually under the CE Marking regime – are compared with the more stringent project specifications such as the military standards or the railway standards, and the identified “gaps” are filled by extra tests which may show the need for “mitigation measures”. Such a process has been described in for instance Cenelec TR 50538:2010, which works in the other direction – i.e. applying gap analysis to military equipment to prove that it meets the EMC Directive.

This process, while initially attractive, can fall at either of two hurdles:

  • The product in question, despite its promises, doesn’t actually meet the detail of the specifications that it claims, or else it simply doesn’t claim enough of such detail to be useful. The CE Marking regime is so inherently lax that it would be surprising if it were otherwise;
  • The “mitigation measures” which turn out to be necessary make the product unusable in its intended application. For instance, the required extra filtering might double its size and weight, or the extra shielding might mean no-one could open the door to reach the front panel.

It may also be that the gap analysis simply isn’t able to identify all the gaps, which only eventually show up once the compliance test is done. Consequent delays to the project make it late and over budget, and the company ends up with the unhappily familiar project manager shuffle.

You might expect that companies which specialise in such projects would have learnt long ago of the dangers of postponing an analysis of EMC requirements, and indeed there are many such organizations that have EMC experts in house who can flag issues at an early stage. But there are also many who don’t, and even in the best organizations the in-house EMC specialist doesn’t always get the chance to offer the analysis that is required.

It doesn’t have to be like this. Some degree of early understanding of what the stringent specifications mean can save a lot of delay at the back end. Here are a few thoughts which come from a generalised fund of experience. Most of the issues arise from application of military standards to commercial products, so that will be the focus here: not to say that other areas don’t have their own issues, and some of these will also be mentioned. The two main military EMC standards are MIL-STD-461F in the US and DEF STAN 59-411 in the UK, and this article will look at their most common test requirements.

Power supply conducted emissions

Before we even get into high frequency issues, a lot of headaches arise at the low frequency end, particularly for AC power supplies. For large cabinet-mounted equipment, as in naval systems, it is common to try to use commercial power supplies, which have met EMC requirements according to the CE Marking standards. But military standards have a number of not-always-obvious requirements which conspire to trip the inexperienced.

Supply harmonics: CE101

The first conducted emissions standard you come to is, in fact, mainly a limitation on supply harmonic currents: MIL-STD-461F CE101.

Fig 1 – CE101, DEF STAN and EN 61000-3-2 supply harmonic limits compared

There are different requirements for aircraft and for naval applications. The graph above shows the CE101 limits for surface ships and submarines, alongside the UK DEF STAN limit for sea service and the commercial Class A limits in EN 61000-3-2. The trick to understanding this is to see that different limits apply to equipment rated above and below 1kVA – and it’s counter-intuitive, in that the more relaxed limits apply to the lower power rating. CE101 starts at the 60Hz fundamental, and for equipment drawing more than 1A of fundamental current the harmonic limits are referenced to that current – that is, they become more relaxed at higher currents; but once your equipment takes more than 1kVA, the lower set of levels (adjusted upwards for the actual fundamental current) suddenly kicks in. For most electronic power supplies, this will mandate power factor correction.

Such a requirement is not unknown for commercial power supplies, and for European requirements at 50Hz, EN 61000-3-2 applies. But it doesn’t apply to 115V or 440V 60Hz supplies; and perversely, neither does it apply a limit to “professional equipment with a total rated power greater than 1kW”. And as can be seen from the graph, the 61000-3-2 limits are higher than the CE101 1A ≤ 1kVA limits.

The upshot of this is that if you know that CE101 will apply, make sure that the combination of all your specified power supplies can meet it – probably this will mean power factor correction on most of them. Don’t expect that a late change to do this will be easy – while it’s theoretically possible to apply a series choke at the input, any such choke will probably be bigger than the power supply itself.

Finally, notice that DEF STAN 59-411’s equivalent DCE01 requirement is much more relaxed, although as pointed out later, it uses a different LISN. But there are other military requirements which describe harmonic limitations, DEF STAN 61-5 and STANAG 1008 among them.

Supply harmonic limitations are related just to the AC supply frequency, and CE101 only extends up to 10kHz, though it will catch any other audio-frequency modulation or intermodulation effects on the supply. But then we get into the effects of the switchmode operating frequency itself.

Conducted RF on the supply: CE102, DCE01

Fig 2 – MIL-STD-461F CE102 basic curve and CISPR Class A conducted emissions limit

Looking at the above graph, which compares the MIL-STD-461F CE102 basic curve (28V) requirement with the commercial CISPR Class A limit, firstly it’s clear that the CISPR limit is less stringent. In fact the MIL limit can be relaxed for higher supply voltages, and many supplies actually meet the CISPR Class B limit, which is around 10dB tighter; but on the other hand the CISPR limit applies the quasi-peak detector, which relaxes the measurement by contrast to the MIL’s peak detector. So a direct comparison is somewhat more complicated.

But the real issue is the extension of the MIL STD down to 10kHz. This is potentially devastating for the higher power commercial switchmode supplies whose switching frequencies are in the 20-150kHz range. Low power units generally have switching frequencies above 150kHz so that both the fundamental and its harmonics are controlled by the CISPR curve. There is normally no content below the fundamental, unless the emissions from several supplies with different frequencies are intermodulating. But commercial supplies with higher power and lower frequencies will not have applied filtering below 150kHz, because for their intended applications they don’t have to. (This is, in fact, a matter of some concern in the wider EMC context – see for instance Cenelec TR 50627:2015, Study Report on Electromagnetic Interference between Electrical Equipment/Systems in the Frequency Range Below 150 kHz).

This means that there will almost certainly be emissions below 150kHz which will be over the CE102 limit, and which may not be obvious from a commercial test report, which won’t show these lower frequencies. The identified mitigation measures would mean extra filtering on the supply input. But low frequency, high power filters are massive. Large chokes and large capacitors are needed. Space and weight penalties are inevitable.

That’s not the only problem: add-on filter design for frequencies of a few tens of kHz will encounter at least two other issues. One is earth leakage current. Capacitors to earth, to deal with common mode emissions, also cause supply frequency leakage. This can be a safety issue and also upsets earth leakage detection circuits, so there is usually a limit on the maximum capacitance that can be allowed. With regard to naval systems, MIL-STD-461F para 4.2.2 says

The use of line-to-ground filters for EMI control shall be minimized. Such filters establish low impedance paths for structure (common-mode) currents through the ground plane and can be a major cause of interference in systems, platforms, or installations because the currents can couple into other equipment using the same ground plane. If such a filter must be employed, the line-to-ground capacitance for each line shall not exceed 0.1 microfarads (μF) for 60 Hertz (Hz) equipment or 0.02 μF for 400 Hz equipment.

Then there’s the problem of filter resonance. A mismatch between an add-on filter and the existing filter in the equipment can create a resonance, typically at a few kHz, which actually amplifies the interference around that frequency. Rarely a problem for commercial products, it can cause unexpected difficulties when you are trying to meet low frequency emission limits: you put in a filter and it makes the emissions worse. To anticipate this, the best approach is to model the total filter circuit in a circuit simulation package such as Spice. But that requires knowledge of the filter component values, which is often not available.
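As an illustration of the kind of check that a simulation gives you, here is a minimal Python sketch (the component values, source and load resistances are entirely hypothetical stand-ins for the unknown built-in and add-on filter stages) that cascades two LC sections using ABCD matrices and reports the worst-case insertion gain; a positive result indicates the resonant peaking described above.

```python
import numpy as np

def series(Z):
    """ABCD matrix of a series impedance."""
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt(Y):
    """ABCD matrix of a shunt admittance."""
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def insertion_gain_db(freqs, stages, Rs=50.0, Rl=50.0):
    """Insertion gain (dB) of cascaded LC low-pass stages between Rs and Rl."""
    out = []
    for f in freqs:
        w = 2 * np.pi * f
        M = np.eye(2, dtype=complex)
        for L, C in stages:
            M = M @ series(1j * w * L) @ shunt(1j * w * C)
        A, B, C2, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
        v_with = Rl / (A * Rl + B + Rs * (C2 * Rl + D))   # load voltage with filter
        v_without = Rl / (Rs + Rl)                        # load voltage without filter
        out.append(20 * np.log10(abs(v_with / v_without)))
    return np.array(out)

freqs = np.logspace(2, 6, 500)      # 100 Hz to 1 MHz
addon = (1e-3, 10e-6)               # hypothetical add-on stage: 1 mH, 10 uF
built_in = (100e-6, 1e-6)           # hypothetical built-in stage: 100 uH, 1 uF
gain = insertion_gain_db(freqs, [addon, built_in])
print(f"Maximum insertion gain {gain.max():.1f} dB at {freqs[gain.argmax()]/1e3:.1f} kHz")
```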

The above comments apply particularly to MIL-STD-461F CE102. This measures voltage to the test ground plane across a 50μH LISN, just like the commercial test does, and so some comparison can be made. The UK DEF STAN 59-411 DCE01 test is different. It measures current to ground on each supply line, into a 5μH LISN. This makes a comparison with commercial standards harder. And, it covers a much wider frequency range: down to 500Hz (20Hz for aircraft), and up to 100MHz (150MHz for aircraft). The most stringent specifications are Land Class A or ship above decks, which apply a limit of 0dBμA (that’s 1 microamp!) from 1MHz (2MHz Land) to 100MHz. If you see this appearing in your specification, you can be sure that high frequency filtering requirements will be extreme; any switchmode noise, or microprocessor clock or data, cannot be allowed to get out into the power input. Careful mechanical grounding design as well as HF filtering and shielding of the supply will be essential.
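To put the 0dBμA limit in context, a trivial calculation (with an assumed, purely illustrative raw emission level) shows the order of common mode insertion loss the supply filtering and screening must achieve:

```python
import math

limit_uA = 1.0      # 0 dBuA limit, i.e. 1 microamp
raw_mA = 1.0        # assumed raw common mode switching noise current (hypothetical)

required_loss_dB = 20 * math.log10(raw_mA * 1000 / limit_uA)
print(f"Required common mode insertion loss: {required_loss_dB:.0f} dB")   # 60 dB
```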

Signal line conducted emissions

MIL-STD-461F does not have a test for signal line emissions. DEF STAN 59-411, on the other hand, does – DCE02. This is a common mode current measurement with the same limits as for the power lines, and it applies both to external cables and intra-system cables longer than 0.5m. If your cables are screened, then this test will exercise the quality of the screening; if there are any unscreened cables, then the interfaces to them will need to be filtered to prevent RF common mode noise. The test is not dissimilar to CISPR 22’s telecom port emissions test, but over a much wider frequency range and with a more universal application, not to mention generally tighter limits.

Although the US MIL STD doesn’t explicitly test signal line emissions, don’t make the mistake of thinking that the signal lines can therefore be ignored as a coupling path. They will contribute to radiated emissions and the radiated tests will pick these up. Whatever your test programme, the cable interfaces are a critical part of the overall test setup. One common error, having put a reasonable amount of effort into the design of the equipment enclosure(s), is to ditch all the good work by throwing in any old cable that comes to hand when the EMC test is looming. Always ensure that you are using, if not the actual cables that will be used on the final installation, at least a cable set that is equivalent in screening terms.

Radiated emissions

The military standards divide the radiated emissions requirements into two parts, for magnetic field and for electric field. Most of the commercial standards don’t: they only measure the electric field, from 30MHz upwards. There are some exceptions to this, which also measure the magnetic field below 30MHz, generally down to 9kHz. Certain lighting products under CISPR 15 are one example; marine equipment is another; and some users of CISPR 11 (industrial, scientific and medical) are also subject to this.

Magnetic field: RE101, DRE02

The military magnetic field emissions test is quite different from any other radiated test (except for the complementary magnetic field susceptibility test).

Fig 3

The method relies on a search coil being swept over the surface of the EUT to find regions of high field strength, and the field is then measured at a distance of 7cm from the surface. This method depends very much on the skill of the test engineer in finding the locations of highest emission. Both the UK DEF STAN and the US MIL STD use the same method, but their limits are different; for naval applications the MIL STD is generally more stringent, but otherwise the DEF STAN is.

The measurement is a test of low frequencies (up to 100kHz) in the near field. As such, the most likely sources will be magnetic components, particularly mains or switchmode transformers, or solenoids or motors, with high leakage flux. Commercial components almost never have to worry about emissions at these frequencies, so you will mostly have no handle on whether or not a particular component will actually be a threat, unless you do your own pre-compliance measurements in advance.

Mitigation measures in case of excess levels are fairly limited. Screening would require a thickness of magnetic material such as mu-metal or in milder cases, steel; occasionally a copper tape shorted turn around the outside of an offending magnetic core can help. If the emission can be traced to a particularly poor wiring layout – that is, high currents passing around a large loop area – then re-routing the wiring or using twisted pair will help. Otherwise, it’s a matter of finding an equivalent component with lower leakage flux than the culprit, or tackling the client for a waiver, based on the distance away from the EUT (greater than 7cm) where the unit does meet the limit.
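If a waiver based on distance is to be argued, a rough first estimate can be made by assuming the near field of a small magnetic source falls off at roughly 60dB per decade of distance (a common approximation, not a guarantee); for example, with an assumed 12dB overage at the standard 7cm:

```python
import math

overage_dB = 12.0   # assumed amount by which the emission exceeds the limit at 7 cm
r0_cm = 7.0         # standard measurement distance

# Small-loop near-field approximation: field falls roughly as 1/r^3, i.e. 60 dB/decade
r_meets_limit_cm = r0_cm * 10 ** (overage_dB / 60.0)
print(f"Limit met at roughly {r_meets_limit_cm:.0f} cm from the surface")   # ~11 cm
```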

Electric field: RE102, DRE01, DRE03

The military E-field measurement is more comparable to the well-known (in commercial circles) CISPR test, but only slightly. There are so many differences that a direct comparison of the two is really a mug’s game, even though in the context of gap analysis it would be highly desirable. We can visit the differences roughly as follows.

Physical layout, test distance and procedure

The CISPR test deliberately tries to ensure that the measuring antenna is in the far field of the EUT, with a minimum distance of 3m and a preferred distance of 10m – although the latter is not so common, given the much higher cost of suitable chambers that can accommodate it. This will indicate the interference potential of the EUT in the majority of commercial situations, when the victim’s antenna is well separated from the source. By contrast, the military and aerospace requirements place the measuring antenna at a fixed 1m from the EUT. The usual justification for this is that on many such platforms (aircraft, land vehicles and ships) the victim antennas are much closer. It also makes the measurement easier and, with low emission limits, aids in the fight against the noise floor of the test instrumentation.

It might be assumed that converting measurements from a far-field 3 or 10m to a near-field 1m, in order to directly compare a commercial result to a military limit, would be a simple matter of adding 9.5 or 20dB, on the assumption that the signal strength is inversely proportional to distance. In rare cases this might actually be true, but it certainly isn’t generally the case – the “conversion” factor can swing widely either side of this figure. So the default approach, using the above assumption, already introduces substantial errors. To understand why, you need to have a detailed insight into the electromagnetic field equations, which isn’t the purpose of this article.
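For reference, the naive inverse-distance assumption referred to above works out as follows – simple arithmetic only, not a claim that the conversion is actually valid for any particular EUT:

```python
import math

def naive_distance_correction_dB(d_measured_m, d_target_m):
    # Assumes field strength falls as 1/d, i.e. 20*log10 scaling with distance
    return 20 * math.log10(d_measured_m / d_target_m)

print(f"3 m  to 1 m: +{naive_distance_correction_dB(3, 1):.1f} dB")    # +9.5 dB
print(f"10 m to 1 m: +{naive_distance_correction_dB(10, 1):.1f} dB")   # +20.0 dB
```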

Distance is just the start. The MIL-STD and DEF STAN test layout requires the EUT to be mounted on, and grounded to (if appropriate) a ground plane bench with its cables stretched out for 2m at a constant height of 5cm above the plane, before terminating in LISNs for power cables or in appropriate support equipment or the wall of the chamber, for signal cables. The CISPR radiated emissions test is different in every one of these aspects.

Fig 4

Fig 5

As well as this, the CISPR test requires the EUT to be rotated to maximise the emissions in azimuth.  This isn’t a requirement of the military method, although it does leave open a requirement for maximization in orientation, without specifying how, other than “encouraging” the use of a pre-scan to identify the face of maximum emission.

Frequency range

The CISPR frequency range for radiated emissions starts at 30MHz and extends upwards to 1GHz or beyond, to 6GHz, depending on the highest internal operating frequency of the EUT. The military range is 10kHz to 18GHz, although not all applications require the whole range. Below 30MHz and above 6GHz there will be no data on commercial equipment performance.

Detector type and bandwidth

The military tests use the peak detector; the CISPR tests use quasi-peak (QP) for radiated emissions up to 1GHz, and peak above this. Both types give the same result on continuous interference signals but the QP gives up to 43.5dB relaxation to pulsed signals, depending on their pulse repetition frequency (prf). So, if a unit emits low-prf signals which rely on the QP detector to pass CISPR limits there will be a significant extra burden in converting these results to the military limits. To do so quantitatively, you would need to know the characteristics of interference sources at each frequency.

The measurement bandwidths are also different: 100kHz for the military standards from 30MHz to 1GHz, versus 120kHz for CISPR. Above 1GHz both use 1MHz. By comparison with other sources of error, the extra fraction of a dB potentially measured by CISPR below 1GHz is negligible and can be safely ignored.
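That claim is easy to bound: for noise-like emissions the indicated level scales with 10·log10 of the bandwidth ratio, and even the coherent broadband worst case (20·log10 scaling) stays under 2dB:

```python
import math

bw_cispr_kHz, bw_mil_kHz = 120, 100
print(f"Noise-like emissions:     {10 * math.log10(bw_cispr_kHz / bw_mil_kHz):.2f} dB")  # ~0.79 dB
print(f"Coherent broadband worst: {20 * math.log10(bw_cispr_kHz / bw_mil_kHz):.2f} dB")  # ~1.58 dB
```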

Limits

In general, the CISPR limit lines are well established and for the majority of applications there are really only two sets: Class A and Class B, the latter being more onerous by 10dB and applicable to residential or domestic situations, Class A being applicable to nearly everything else. Military standards have many more variations depending on application, added to which the frequency ranges and levels can be modified by the customer’s contract. Because of these variations, it’s not generally possible to say that one set of standards is more or less onerous than the other, although it’s to be expected that any equipment mounted externally to the platform will have much more strict requirements than any commercial application. Additionally, DEF STAN 59-411 has a separate test, DRE03 between 1.6MHz and 30MHz (88MHz for man-worn equipment) – intended to mimic the use of equipment in close proximity to Combat Net Radio (CNR) installations used by the Army – which uses a tuned antenna representative of the Army’s radios. The antenna is “significantly more sensitive than the broadband antennas used in Test Method DRE01” and this allows even lower limits to be put in place for this application.

In summary, though, it is very difficult to make an accurate determination of whether a given piece of commercial equipment will meet military radiated emissions requirements through gap analysis. From the point of view of project planning, there are two principal coupling paths that radiated emissions will take from the equipment. One is radiation directly from the enclosure, the other is radiation from cables connected to the enclosure. Therefore, areas of the design that need the most work to control these emissions will include cable screening and enclosure screening. The more critical the limit levels – and the wider the frequency range – the more important it is to actively design the enclosure for screening effectiveness; and also to ensure that cable and interface construction maintains the screening effectiveness. The two areas are complementary, one will be useless without the other. Conductive gaskets for seams and connector shells in the enclosure need to go hand in hand with selection of screened cable and termination of that screen to the mating connector. The trick lies in understanding that here we are dealing with the electrical performance of mechanical components, and so both areas of design must be evaluated.

Marine equipment

Before leaving the subject of limits, one non-military application where it is possible to apply a gap analysis is in radiated emissions of civil marine equipment. The marine standard IEC/EN 60945 uses substantially the same method as the normal CISPR tests and CISPR results can be compared, almost directly. It would be quite typical to want to use CISPR Class A-compliant equipment – such as video monitors or network switches – on board a ship. The two limits are compared below. Note that over much of the frequency range the marine requirements are quite relaxed. But there’s one exception: the VHF marine band, 156-165MHz, where they are anything but relaxed. This can cause substantial headaches for marine system builders, and it’s as well to be aware of this at the outset.

Fig 6 – IEC/EN 60945 and CISPR Class A radiated emissions limits compared

RF susceptibility

Radiated

The coupling routes for radiated RF susceptibility issues are generally the reciprocal of those for radiated emissions, and therefore shielding design techniques will work in both directions. However, the internal circuits which are affected by high level applied RF fields may be quite different from those which create emissions. It is usual for power switching circuits and digital processing to create RF emissions while being relatively unaffected by incoming RF; in contrast, low-level analogue circuits, typically for transducer or audio processing, will not create emissions but may be affected by millivolts of RF. Therefore for many products there can be different areas which are relevant for one or other phenomenon.

Applied RF field levels, as with emissions limits, can show wide variations depending on the required standard and application. DEF STAN 59-411 DRS02 has a “Manhattan skyline” of levels versus frequency, ranging from typically 10V/m at low frequency to 1000V/m, pulse modulated, in the microwave region, if equipment will be sited potentially in the main beam of a radar transmitter (2000V/m for aircraft). MIL-STD-461 RS103 has a rather more uniform set of requirements, ranging from 10V/m for ships below decks to 200V/m for aircraft. RTCA DO-160, which is widely applied for civil aircraft equipment and often for military ones too, has a massive table (rather than a Manhattan skyline graph, but to the same effect) which defines susceptibility levels versus frequency for 12 different categories of equipment, with the least severe being 1V/m and the most severe being 7200V/m. Frequency ranges are tailored to the application as well, but can extend from 10kHz to 40GHz.

Compare this to the majority of commercial standard requirements, which are generally fixed at 3V/m for residential and 10V/m for industrial and marine applications; railways push the boundary to 20V/m, and the basic standard IEC 61000-4-3 proposes a maximum of 30V/m. And the frequency range for these radiated requirements starts at 80MHz and goes up to 2.7GHz at the most, with much lower levels being the norm above 1GHz; it may be fair comment that most commercial products aren’t expected to find themselves in the main beam of a surveillance radar. (It’s noteworthy that the commercial tests refer to immunity, whereas the military/aerospace ones refer to the same phenomena as susceptibility.)

Relating these levels means that the more stringent specifications will demand a high degree of extra shielding, which has to be allowed for in the initial design choices. Trying to add or improve shielding later in the mechanical design is always going to create serious headaches.

Test method

As with radiated emissions, there are differences in the susceptibility test procedure too. The military test layout stays constant between emissions and susceptibility. The CISPR/IEC layouts are different, much to the chagrin of test labs who have to re-position equipment between tests and sometimes use different chambers in order to maintain compliance with the specifications. But for the radiated susceptibility test, the biggest issue lies in how the field strength is controlled. For the tests in MIL-STD-461 and DEF STAN 59-411 (but not DO-160) the applied field strength at a probe near to the EUT is monitored and controlled during the test. For DO-160 and the commercial test to IEC 61000-4-3, the field is pre-calibrated in the absence of the EUT and the same recorded forward power is replayed during the test. These two methods can produce fundamentally different results, for the same specification level in volts per metre, depending on the nature of the EUT.

In addition to this, there are differences in the modulation that is applied to the RF stress. The military and aerospace tests prefer 1kHz square wave modulation, but also pulse modulation where it is relevant (at radar frequencies), along with other more specific types of modulation in some cases. The IEC 61000-4-3 test uses only 1kHz sinusoidal modulation; but it does require 80% modulation depth, which effectively raises the peak applied stress level to 1.8 times the specification level. In this narrow sense, the commercial test is more stressful than the military, for a given spec level.
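The 1.8 factor is just the AM envelope arithmetic; for example, assuming the field is calibrated unmodulated at the specification level as IEC 61000-4-3 describes:

```python
spec_level_V_per_m = 10.0    # example specification level (unmodulated carrier)
mod_depth = 0.8              # 80 % amplitude modulation required by IEC 61000-4-3

peak_envelope = spec_level_V_per_m * (1 + mod_depth)
print(f"Peak envelope during the test: {peak_envelope:.0f} V/m")   # 18 V/m = 1.8 x spec level
```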

But when you are looking at a test report for a COTS product, for any immunity test one of the most important questions is: how was the product monitored during the test, and what criteria were applied to distinguish a pass from a fail? In many test reports, this information is so vague as to be of no help whatsoever, yet it is actually what determines the suitability of that equipment for the application. No test standard will specify the performance criteria in detail; that is the job of the test plan for that product. The report should reference the test plan and where necessary reproduce its detail. Few commercial test reports do this – often, one suspects, because there never was a test plan in the first place.

Conducted RF

Similar issues apply to the specifications for conducted RF susceptibility. Direct comparisons are harder because commercial standards, based around IEC 61000-4-6, apply a voltage level to the cable from a source impedance of 150Ω; virtually all other standards use the method of “bulk current injection” which applies a current level via a clip-on current transformer. Relating the two is only possible if you know the common mode input impedance of the interface you are testing. It would be fair to say that even the designers most familiar with their product will be guessing – it’s not a feature which is necessary to know for the functioning of the equipment, even though it has a direct impact on EMC performance.

Beyond this, as with radiated RF susceptibility, the military and aerospace requirements have a smörgåsbord of levels versus frequency for different applications. DO-160 has a maximum of 300mA, MIL-STD-461 CS114 has 280mA and DEF STAN 59-411 DCS has 560mA for their most severe applications. In terms of power into the 50Ω calibration jig, this is between 4 and 15 watts. Compare this with the typical 10V emf from a 150Ω source required by IEC 61000-4-6, which gives 5V rms into the 150Ω calibration load – only 166mW.
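Those wattage figures follow directly from P = I²R in the 50Ω jig and from the emf/source-impedance arrangement of IEC 61000-4-6; a quick check with the values quoted above:

```python
def jig_power_W(current_A, jig_ohms=50.0):
    # Power developed in the 50-ohm bulk current injection calibration jig
    return current_A ** 2 * jig_ohms

for name, amps in [("DO-160 (300 mA)", 0.30),
                   ("MIL-STD-461 CS114 (280 mA)", 0.28),
                   ("DEF STAN 59-411 (560 mA)", 0.56)]:
    print(f"{name}: {jig_power_W(amps):.1f} W")

# IEC 61000-4-6: 10 V emf behind 150 ohms gives 5 V rms across a matched 150-ohm load
v_rms, r_cal = 5.0, 150.0
print(f"IEC 61000-4-6 (10 V level): {1000 * v_rms ** 2 / r_cal:.1f} mW")   # ~166.7 mW
```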

Design mitigation measures for this test are limited to two principal approaches: effective screening of cables, and effective filtering of interfaces. One can possibly substitute for the other, but a combination of both is the best method. Any weakness in cable screen termination will allow interference through to the screened circuit within, which can be mopped up by a moderate degree of filtering. The overall protection, though, must work across the frequency range of 10kHz to 400MHz (200MHz for MIL-STD-461). Designing a filter which will deal with this spectrum, even for power supplies, is not trivial and usually the most cost-effective approach is the combination.

Lightning and transient susceptibility

The US MIL-STD-461 doesn’t have an explicit lightning surge test, at least up to issue F. One is under discussion for issue G. The UK DEF STAN 59-411 does have some serious surges, including one for lightning (DCS09), but the most comprehensive is that in section 22 of DO-160. This has been uprated in every re-issue of the standard and is now one to challenge any aircraft equipment designer – if, that is, you can decode the arcane instructions for how to select the various levels and waveforms.

This author was privileged to visit the lightning surge test facility of a major Chinese telecomms supplier a few years ago (the capacitor bank alone filled most of the room – “Please, Mr Tim, make sure to stand well back when I press this button”) – there are regions of the world where thunderstorms are the norm rather than the exception, and telecom towers are natural attractors for the strike. So some parts of the facility had to be tested with the full whack. Aircraft, naturally, can’t be trusted not to fly near to or even under (not into) a thunderstorm occasionally. But practically, the test levels need to be tailored more to the currents that would be expected on the wiring interfaces, which in turn depends on the unit’s location in or outside the aircraft, the level of hardening required and the type of construction of the aircraft’s structure.

DO-160 version G has two methods in its section 22: pin injection, and cable bundle tests. The first is a “damage tolerance” test; the second evaluates “functional upset tolerance” and has a variety of waveforms including single and multiple stroke and multiple bursts. The best commercial equipment may be specified to cope with the IEC 61000-4-5 lightning surge test, but this does not include pin injection, and its surge waveform does not reflect the variety of waveforms required by DO-160. Extra interface protection will be needed if you are trying to use such equipment in this application.

Pin injection

For this type of test, the EUT needs to be powered up and operating, but its connectors (except for the power supply) are disconnected and the specified transient pulse is applied, ten times in each polarity, between each designated connector pin and the ground reference. Thus cable screening is of no relevance for this test. If the power input is to be tested, the transient is applied in series with the supply voltage, with the external supply source properly protected.

Three different waveforms are specified, with five possible levels. The most stressful in energy terms is waveform 5A, which is a 40/120μs surge with a 1Ω source impedance and a peak voltage from 50V (Level 1) to 1600V (Level 5 – this one is fairly rare, but level 4 at 750V is not uncommon). The other waveforms (labelled 3 and 4) are a 1MHz damped sinusoid and a faster unipolar transient, from higher source impedances.

Cable bundle

These tests are intended to check for both damage and upset, and work by injecting transients into each cable interface as a whole, either via a current probe or by a connection between the unit enclosure and the ground reference of the test. In this case, cable screening effectiveness is crucial. In a well-screened system the surge will pass harmlessly down the cable screen and around the EUT enclosure without impinging on the inner circuits. Successfully beating this test then means ensuring that both the cable assembly and the enclosure have a low transfer impedance, to prevent common mode currents developing internal voltages. To put a sample figure on this, a transfer impedance of 50mΩ/m, typical of a good quality single-braided screen at 10MHz, when faced with a 1000A surge down the screen, will create a 50V pulse in common mode on the cable’s internal circuits per metre length. A double-braided screen could drop this to 1mΩ/m and hence 1V. But that demands extreme care in assembling the connector screening shell.
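That 50V-per-metre estimate is simply transfer impedance times length times screen current, as the short sketch below shows with the same example figures:

```python
def coupled_voltage_V(zt_ohm_per_m, length_m, screen_current_A):
    # Common mode voltage induced on the cable's inner circuits by current on the screen
    return zt_ohm_per_m * length_m * screen_current_A

print(coupled_voltage_V(0.050, 1.0, 1000.0))   # single braid, ~50 mohm/m at 10 MHz -> 50 V
print(coupled_voltage_V(0.001, 1.0, 1000.0))   # double braid, ~1 mohm/m            -> 1 V
```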

Alternatively, if a cable is unscreened, the interface has to absorb or resist the injected surge in the same way as for pin injection. And in addition, the unit has to continue operating without upset, so any pulse propagating into the circuit must not affect its operation.

The cable bundle tests use the same waveforms (3, 4 and 5) as before, with possibly two more, waveform 1 being the current equivalent of waveform 4 (6.4/69μs) and waveform 2 being a faster one with 0.1μs risetime. Single stroke, multiple stroke (one large plus thirteen smaller transients) and multiple burst (three bursts of 20 damped sinusoid transients, every 3 seconds, for at least 5 minutes) applications are specified. Waveform 6 applies to low impedance cable bundles in place of waveform 3 for multiple burst tests. The levels vary from 50V/100A for the mildest to 1600V/3200A for the most severe, and that’s just for waveform 1.

Transient protection

Given that your specification, once it has been decoded, will be quite explicit as to the levels and pins to be tested, you can design in protection with a good knowledge of what it will be protecting against. There are two main techniques. The first is isolation: if all interfaces are isolated from ground, and the isolation barrier can withstand the full surge voltage, this is a good start. But remember that there is a dv/dt associated with the surges and therefore capacitance from each circuit to ground and/or through the isolation barrier is also important, since the edge of the stress waveform will be coupled through this capacitance to the circuits. If you are relying only on isolation, it is necessary to identify parasitic and intentional capacitances to the ground reference (usually the enclosure) and make sure that the paths these provide are innocuous.

The second method is clamping the surge voltage with transient suppressors to ground. Since energy will be deposited in the suppressor on each test, you need to size it appropriately for the specification level to be sure it is not over-stressed. You also need to be sure that the downstream circuits can withstand the peak voltage that the suppressor will clamp to. Whilst they can be effective, suppressors aren’t suitable for all types of circuit, particularly wideband interfaces. And complementary to the comment above on capacitance, when a suppressor clamps a transient, it passes a di/dt current pulse; any inductance in series will create a secondary transient voltage, from V = -L·di/dt, which may be at damaging levels. Short leads and tracks, going directly to the right places (perhaps the enclosure, perhaps local circuit nodes) are critical.
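To see how little series inductance it takes to generate a damaging secondary transient, assume (purely for illustration) about 20mm of track at roughly 1nH/mm in series with a suppressor that has to divert 100A in 0.1µs:

```python
L_series_H = 20e-9    # ~20 mm of track at roughly 1 nH/mm (assumed)
di_A = 100.0          # current diverted by the suppressor (assumed)
dt_s = 0.1e-6         # over 0.1 microseconds

v_secondary = L_series_H * di_A / dt_s
print(f"Secondary transient across the track inductance: {v_secondary:.0f} V")   # 20 V
```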

Another more sophisticated approach is to block the surge path but only during its actual occurrence, using a series-connected high voltage MOSFET and control circuitry. Given the relatively slow risetime of the waveforms this is feasible, but it is complex, and has to take into account both surge polarities. It may well be necessary in high-reliability applications when you simply cannot use a transient suppressor, because if a suppressor is destroyed open-circuit this will not be apparent – the equipment will carry on operating normally – but you will have lost surge protection without knowing it. (A related question is, is it better for a transient suppressor to fail open circuit or short circuit? It could fail either way, but the consequences are wildly different.)

Equipment categories

The actual tests and levels applied depend on the application and need to be clearly specified in the procurement documentation. This should be “consistent with its expected use and aircraft installation”. The specification should consist of six alphanumeric characters as shown below. The first, third and fifth letters are “waveform set designators”, determining which tests are to be done with which waveforms – decided by the type of aircraft and whether or not the cables are screened – and the other numbers n, i and j apply levels for each set of tests, depending on where in the aircraft the equipment will be mounted.

Categories – DO-160 section 22 six-character category designation

X in the specification means no tests are to be performed, Z means that the test is performed at levels or with methods different from the standardized set.

As a very, very approximate first-pass judgement, the higher the numbers and the further into the alphabet the letters go, the more severe the test. But you can see that the complexity is daunting and every case has to be analysed in its own right.

Other transient tests

By comparison with DO-160 section 22, other military transient tests are fairly straightforward, although not necessarily lenient. In most cases the transients are applied by current probe on cable bundles; this is the case for MIL-STD-461 CS115 and 116, and for DEF STAN 59-411 DCS04, 05, and 08. DCS06 is applied on power supply lines individually, as is also DCS08. The nearest test to the DO160 lightning tests is DEF STAN 59-411 DCS09, which applies similar and in some cases identical waveforms. A wry note at the beginning of DCS09 says

When testing with the Long Waveform, in particular, it is advisable for personnel in the vicinity of the EUT to wear eye protection. Some components have been known to explode and project debris over distances of several metres. Some types of pulse generators can produce a high intensity burst of noise when they are fired. Operators, trials engineers and observers should be made aware of this and advised to wear ear protection.

Transient suppressors, where they are used, should be sized appropriately.

Supply voltage ratings

As a final piece of light relief, don’t forget that some equipment specifications make substantial demands on the power supply’s robustness. We’ve already mentioned harmonic current limits at the beginning of this article, but there are other power input issues which do, strictly speaking, fall under the umbrella of EMC. They don’t appear in MIL-STD-461 or DEF STAN 59-411, but there are other military requirements which are relevant, for instance MIL-STDs 704 and 1399, or DEF STAN 61-5, or STANAG 1008. RTCA DO-160 for aerospace has section 16 (which does include harmonic current limits); and in another industry, railway rolling stock equipment has to meet EN 50155. Mostly, these specify the degree to which input voltage dips and dropouts can be expected, along with the levels of under- and over-voltage in normal and abnormal operation. The last of these is often the most stressful.

DO-160 section 16’s abnormal over-voltage requirement for Category Z (the most severe) on 28V DC supplies demands that the power input withstand 80V (+185%) for 0.1 second and 48V (+71%) for 1 second. Even the least severe, Category A, requires +65% for 0.1 second and +35% for 1 second. For EN 50155, a voltage surge on a battery supply of +40% for 0.1 second should cause no deviation of function, and for 1 second should not cause damage. Overvoltages for such durations can’t be clamped by a transient suppressor; suppressors, if used, need to be rated above the peak voltage that will occur for these conditions. But at the same time, it can be difficult to design power supplies for such levels using conventional SMPS integrated circuits. Instead, a discrete pre-regulator – which might also implement other functions such as soft start and reverse polarity protection – is a common solution. So the complete power supply input scheme for a DC supply is likely to look like that shown below, and the early components need to be substantially over-rated compared to the normal operating voltage.

Fig 7 – typical DC power supply input scheme
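As a quick check of the margins those figures imply for a 28V input (the voltage levels are taken from the text above; the suppressor standoff comment is an inference, not a requirement of the standard):

```python
nominal_V = 28.0
abnormal_surges = {"DO-160 Cat Z, 0.1 s": 80.0, "DO-160 Cat Z, 1 s": 48.0}

for name, volts in abnormal_surges.items():
    print(f"{name}: {volts:.0f} V = +{100 * (volts / nominal_V - 1):.1f} % above nominal")

# Any input transient suppressor must therefore have a standoff voltage above 80 V,
# and the components ahead of the pre-regulator must be rated accordingly.
```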

Conclusion

With all the above requirements, limits and levels in mind, if you are expecting to use any commercially-produced apparatus within a system that needs to comply with military/aerospace requirements, or others for which it wasn’t designed, it should by now be obvious that right at the beginning of the design process, you must start with a detailed review of the procurement contract’s EMC specification before you get involved in any negotiations on contract price. EMC requirements have the potential to make or break a project – they must be respected and their implications for schematic, PCB and mechanical practices understood.

Tim Williams
Elmac Services
May 2015
www.elmac.co.uk
mailto:consult@elmac.co.uk


Automatic verification of EMC pulse test equipment by means of the LabVIEW programming environment

By Krzysztof SIECZKAREK1, Adam MAĆKOWIAK1
1Institute of Logistics and Warehousing, Laboratory of Electronic Equipment, Poznań, Poland, LA@ilim.poznan.pl

Abstract: The aim of this work was to create an application dedicated to the automatic verification of test equipment used in electromagnetic compatibility testing. For this purpose the LabVIEW programming environment was chosen – a versatile tool with a graphical programming language. This paper presents an analysis of the normative requirements, an outline of the LabVIEW environment, and a description of the software created.

Keywords: EMC, equipment verification, transient signals, burst, surge, LabVIEW software.

 

Introduction

The operation of an accredited laboratory performing electromagnetic compatibility testing requires periodic verification of its test set-ups. Because of the large number of parameters to check, such verification is time-consuming and demands a good knowledge of the test equipment (for instance, the digital oscilloscope), both in terms of its care and protection and in matching the measurement paths to the high-voltage signals involved. For these reasons the Institute of Logistics and Warehousing decided to create a tool for automatic verification that minimises personnel involvement in periodic verifications and shortens their duration.

Within this work, applications were created for the automatic verification of:

  • electrical fast transient/burst immunity test equipment,
  • surge immunity test equipment,
  • pulse magnetic field immunity test equipment.

 

Normative requirements

Surge immunity test

Test methods and immunity requirements for electrical equipment subjected to surges are well described in International Standard [1]. This standard also gives the specification and requirements for the test generators and voltage waveforms. The values of the different components are selected so that the generator delivers a 1.2/50 µs voltage surge under open-circuit conditions and an 8/20 µs current surge into a short circuit.
The real waveform of the 1.2/50 µs open-circuit surge voltage is shown in figure 1. The summary requirements for surge generator verification are given in table 1.

Figure 1. The real waveform of the 1.2/50 µs open-circuit surge voltage.

 

Table 1. Requirements for surge generator verification.


Electrical fast transient/burst immunity test

Test methods and immunity requirements for electrical equipment subjected to fast transient disturbances are described in International Standard [2]. This standard also gives the specification and requirements for the test generators and voltage waveforms. Real waveforms of a fast transient are shown in figures 2 and 3.

Figure 2. Waveform of a real fast transient/burst.

Figure 3. Waveform of a real single burst pulse.

The summary requirements for fast transient/burst generator verification are given in table 2.

Table 2. Requirements for fast transient/burst generator verification.


Pulse magnetic field immunity test

Test methods and immunity requirements for electrical equipment subjected to impulse magnetic fields are described in International Standard [3]. This standard also gives the specification and requirements for the test generators and waveforms. The real waveform of the current in the induction coil is shown in figure 4.

Figure 4. Waveform of the real short-circuit current.

The summary requirements for impulse magnetic field generator verification are given in table 3.

Table 3. Requirements for impulse magnetic field generator verification.


LabVIEW software

LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical development environment for data acquisition, analysis and presentation from National Instruments. Unlike text-based languages such as C or Pascal, it does not create programs as lines of code. LabVIEW uses a graphical programming language, G, to create programs in block diagram form. LabVIEW programs are called virtual instruments because their appearance and operation imitate physical instruments, such as oscilloscopes and multimeters. LabVIEW contains a comprehensive set of tools for acquiring, analyzing, displaying and storing data, as well as tools for debugging.

All LabVIEW programs have a front panel and a block diagram. The front panel is the graphical user interface; it collects user input and displays program output, and can contain knobs, push buttons, graphs, and other controls and indicators. The block diagram contains the graphical source code of the application. In the block diagram, one can program the application to control and perform functions on the inputs and outputs created on the front panel. The block diagram can include functions and structures from the built-in LabVIEW libraries, as well as terminals associated with the controls and indicators created on the front panel [4].

The front panel of a basic application generating a sine wave is shown in figure 5; its block diagram is shown in figure 6.

Figure 5. Typical front panel of a LabVIEW application.

Figure 6. Block diagram of the application shown in figure 5.

 

Applications

Verification of surge generator

Verification of the surge generator was carried out in the set-up shown in figure 7.

Figure 7. Set-up for surge generator verification.

A notebook running the LabVIEW software controls the generator and the oscilloscope over the GPIB interface. To protect the oscilloscope input from overvoltage, it was connected to the line output of the generator through an active probe. The application consists of three functional parts:

  • initialisation,
  • configuration,
  • data acquisition.

In the initialisation part, general information on how to set up the test is displayed. The application then checks the test equipment and initialises communication.

In the configuration part, the user can set the parameters of the test, such as surge voltage, impulse polarity and number of repetitions. On the basis of the generator settings, the application automatically configures the oscilloscope to ensure optimal signal sampling and recognition. The oscilloscope can also be set manually. The equipment settings can be read from and written to a file.

The main part of the application is the data acquisition and processing module. After the oscilloscope triggers, the appropriate quantities are measured and shown on the application panel, both as the currently measured values and as the mean value at that point in the measurement cycle. The panel also shows the recorded waveform (on three different time-bases). This loop is repeated until the number of repetitions specified by the user is reached. The front panel of the surge generator verification application is shown in figure 8.

Figure 8. Front panel of the surge generator verification software.

For short-circuit current measurements, the connection between the oscilloscope and the generator is changed – a current transformer is used instead of the P6249 probe. The measured quantities and the front panel arrangement are the same.
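For readers more used to text-based tools, the acquisition loop described above might look something like the following Python/PyVISA sketch. This is only an illustration of the structure (initialise, configure, acquire): the instrument addresses, SCPI-style command strings and the measurement query are hypothetical placeholders, since the actual implementation is a LabVIEW block diagram driving a specific generator, probe and oscilloscope.

```python
import pyvisa

N_REPEATS = 10                                   # repetition count set by the user (example)
rm = pyvisa.ResourceManager()
gen = rm.open_resource("GPIB0::5::INSTR")        # surge generator (hypothetical address)
scope = rm.open_resource("GPIB0::7::INSTR")      # oscilloscope (hypothetical address)

# Initialisation: identify the instruments and reset them
print(gen.query("*IDN?").strip(), scope.query("*IDN?").strip())
gen.write("*RST")
scope.write("*RST")

# Configuration: test parameters, and an oscilloscope time-base suited to a 1.2/50 us surge
gen.write("SURGE:VOLT 1000")                     # hypothetical command: 1 kV open-circuit level
gen.write("SURGE:POL POS")                       # hypothetical command: positive polarity
scope.write(":TIMEBASE:SCALE 10E-6")             # hypothetical command

# Acquisition loop: fire, measure, accumulate the running mean
peaks = []
for _ in range(N_REPEATS):
    gen.write("SURGE:TRIG")                      # hypothetical command: fire one surge
    peak_V = float(scope.query(":MEAS:VMAX?"))   # hypothetical measurement query
    peaks.append(peak_V)
    print(f"peak {peak_V:.1f} V, running mean {sum(peaks) / len(peaks):.1f} V")
```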

 

Verification of burst generator

The test set-up for burst generator verification is shown in figure 9.

Figure 9. Set-up for burst generator verification.

A notebook running the LabVIEW software again controls the generator and the oscilloscope over the GPIB interface. The application likewise consists of three functional parts.

In the initialisation part, general information on how to set up the test is displayed. The application then checks the test equipment and initialises communication.

In the configuration part, the user can set the test parameters and select an attenuator. On the basis of the generator settings, the application automatically configures the oscilloscope to ensure optimal signal sampling and recognition. Because of the large number of quantities to be measured, manual oscilloscope setting is not possible.

The main part of the application is the data acquisition and processing module. To measure all the required parameters, the burst waveform is analysed on five different time-bases within one measurement cycle. After the oscilloscope triggers, the appropriate quantities are measured and shown on the application panel, both as the currently measured values and as the mean value at that point in the measurement cycle. The panel also shows the recorded waveforms. The loop is repeated until the number of repetitions specified by the user is reached. The front panel of the burst generator verification application is shown in figure 10.

Figure 10. Front panel of the burst generator verification software.

 

Verification of magnetic field generator

The test set-up for magnetic field generator verification is shown in figure 11. A notebook running the LabVIEW software controls the generator and the oscilloscope over the GPIB interface. The induction coil current is measured using a current transformer with a ratio of 1 A = 0.01 V.

Figure 11. Front panel of the magnetic field generator verification software.

The algorithm, measured values and front panel appearance are the same as in the current measurement part of the surge verification software.

 

Conclusion

The aim of the work described in this article was to create software for the automatic verification of selected equipment used in an electromagnetic compatibility laboratory. The following three applications were made:

  • software for verification of electrical fast transient/burst generator,
  • software for verification of surge generator,
  • software for verification of pulse magnetic field generating system.

All of the above applications were created using the LabVIEW graphical development environment. The process of creating them was at the same time a good test of the practical possibilities of LabVIEW in the laboratory. Its versatility, ease of use and the large number of existing libraries make it a suitable tool for controlling electronic equipment. Additionally, creating an application in the LabVIEW environment does not require advanced programming skills.

All of the applications created within this work were put into practice in the EMC Laboratory at the Institute of Logistics and Warehousing and are also available to other EMC test labs. All of them have stood the test of time, simplifying the verification process and making it considerably shorter.

 

References

[1]   EN 61000-4-5 Electromagnetic Compatibility – Testing and measurement techniques – Surge immunity test.

[2]   EN 61000-4-4 Electromagnetic Compatibility – Testing and measurement techniques – Electrical fast transient/burst immunity test.

[3]   EN 61000-4-9 Electromagnetic Compatibility – Testing and measurement techniques – Pulse magnetic field immunity test.

[4]   LabVIEW 7 Express – Getting Started with LabVIEW, National Instruments, April 2003 Edition.

 

About Company

The Laboratory of Electronic Devices of the Institute of Logistics and Warehousing carries out research work and technical assessment in the areas of electrical safety and electromagnetic compatibility (EMC) of electronic and electrical equipment. The laboratory holds an accredited quality management system according to EN ISO/IEC 17025 and is notified by the European Commission to act as a Notified Body under the 2004/108/EC EMC Directive.

 

Authors

Krzysztof Sieczkarek graduated from Poznan University of Technology in 1994, PhD in 2003. He has been working in EMC test laboratory of Institute of Logistics and Warehousing in Poznan since 1994; present position – laboratory manager. Member of IEEE EMCS, FOR-EMC, IEC TC 77 and CISPR I.

Adam Maćkowiak graduated from Poznan University of Technology in 1997, PhD in 2001. He has been working in EMC test laboratory of Institute of Logistics and Warehousing in Poznan since 2003; present position – technical manager.

2016 Europe EMC Guide

The 2016 Interference Technology EMC Europe Guide is now available. You will find articles on Why the ANSI/ESD 1-foot rule needs to be changed; Mobile Generations Explained; CISPR 32 – Multimedia Equipment Emission Requirements; Is EMC Ready for the Internet of Things?; EMC Project Management and COTS; and more.

Articles are available in English as well.

View the issue here.

Have Suspect Counterfeit ESD Packaging & Materials Infiltrated the Aerospace & Defense Supply Chain?


 

Bob Vermillion, CPP, Fellow

Certified ESD & Product Safety Engineer – iNARTE

RMV Technology Group, LLC

NASA-Ames Research Center

Moffett Field, CA 94035

 

According to the 12 February 2016 edition of EE Times, President Barack Obama indicated a day earlier that he would sign into law a customs bill passed by the U.S. Senate that includes a provision to combat counterfeit semiconductors (Figure 2) [1]. The law is the Trade Facilitation and Trade Enforcement Act of 2015 (H.R. 644/S. 1269). It mandates that U.S. Customs & Border Protection share information on, and samples of, EEE parts identified as suspected counterfeits for inspection and testing. In 2011, the Semiconductor Industry Association estimated that counterfeiting costs U.S.-based semiconductor companies more than $7.5 billion per year.

Over the past several years, U.S.-based organizations have sacrificed the traditional internal auditing process, relying instead on contract manufacturers, distributors and suppliers to do the right thing. To compound the problem, organizations have accepted supplier specifications as adequate proof when qualifying a product for use. Inspection of ESD sensitive devices or EEE parts is very important, but without special safeguards the additional handling needed to remove and repack a product for validation can cause both physical and ESD damage in the process. For electronic components, including those not sensitive to static electricity, measures must be used to detect, inspect and validate the packaging as well as the incoming EEE parts.

Figure 1: Different types of static control packaging, labels and indication cards (see Table 1)

 

Blister Pack

Antistatic Pink or Blue Poly Bags

Conductive Carbon Loaded Bags

ESD Static Shielding Bags

ESD Aluminum Moisture Barrier Bags (Type I)

ESD Grid Bags

ESD Corrugated Containers

ESD Polymer (Plastic) Boxes

ESD Paperboard

ESD Plastic Corrugated (Extruded)

Plastic Hinged Clear Antistat Coated Boxes

Plastic Hinged Conductive Carbon Loaded Boxes

Antistatic Clamshells

Antistatic Trays

Static Dissipative Trays

Inherently Conductive Polymer Trays

Carbon Loaded Trays

Carbon Coated Trays

ESD PE Films

Antistatic Films

ESD Foams

Cross-linked foams

Antistatic Pink Poly Pallet Wrap

ESD Cleanroom Paper

Antistatic 8” x 11” Paper

Antistatic Tape

Clear, Blue and Pink Antistatic Work Carriers

ESD Polystyrene Peanuts (Not Allowed)

ESD Safe Tape & Reel

ESD Rubber bands and straps

ESD Safe Antistatic Dip Tubes (IC Carriers)

Antistatic End Caps (Pink and Black)

ESD Symbol Labels

Antistatic Labels

ESD Wafer Boats

ESD Wafer Carrier

ESD Wafer Separators

ESD Torn Bag with Tubing, IC Carrier or Dip Tubes

Antistatic Bubble

Blue Dissipative Bubble

ESD JEDEC Trays

ESD Cordless Wrist Straps

Humidity Indicator Cards

Sorbent (Absorbent) Pads, Antistatic

Table 1

 

The author was the first to present on issues of suspect counterfeit ESD packaging and materials (Figure 1) in the DoD supply chain, at the 2010 NASA Quality Leadership Forum (QLF). A Supplier Technical Data Sheet is no longer enough to attest that a product is compliant with ANSI/ESD S541 (the ESD standard for packaging and materials). In 2012, Dr. Doug White (US Army, DAC) and I presented “ESD Packaging for Supplier Non-conformance & The Importance of Proper Training & Qualification Testing as an Effective Countermeasure for Mitigation” at the National Institute of Packaging & Handling Logistics Engineers (NIPHLE) Annual Conference in Washington, DC. Consequently, due diligence in the initial testing of a protective package constitutes a major first step toward supplier compliance.

Figure 2: Tape and reel photograph showing components in the carrier tray and clear film

 

Today, U.S.-based products are commonly substituted by offshore manufacturers without traceability in the global supply chain. In contrast to aerospace & defense, the pharmaceutical sector is actively engaged in a sound packaging engineering approach that allows non-conforming or suspect counterfeit products to be tracked, identified, inspected and then placed into quarantine (Figure 3).

Figure 3: Static shielding bag housing IC carrier rails (dip tubes)

 

Scope of the Problem: Supplier non-conformance and suspect counterfeit packaging represent an electrostatic discharge (ESD) hazard to sensitive devices or components by generating high-voltage discharges during transport, parts inspection and manufacturing. Several aerospace-related issues involve long-term storage with antistatic foam, antistatic bubble, vacuum-formed antistatic polymers and Type 1 moisture barrier bags.

The late John Kolyer, Ph.D. (Boeing, Ret.) and Ray Gompf, P.E., Ph.D. (NASA-KSC, Ret.) advocated the use of a formalized material qualification process based on physical testing. Today, the DoD, prime contractors and CMs rely heavily on visual inspection of ESD packaging materials. Over the past decade, suspect counterfeit ESD packaging materials have entered the supply chain largely unnoticed because of the practice of accepting a Supplier Technical Data Sheet in lieu of testing.

The common practice of visually inspecting an outer package label in combination with bar-code scanning has not prevented suspect counterfeit static control packaging from entering the DoD supply chain. To compound the matter, counterfeiters use an inexpensive walnut-blasting method to remove a component’s lettering, leaving little to no evidence of tampering, as illustrated in Figure 4.

Figure 4: Walnut blasting of lettering from a rejected or outdated EEE part

 

Another countermeasure for detection is the use of RFID in packaging for incoming inspection and inventory tracking. “Hands-on” training is a reliable way to teach incoming-inspection personnel advanced inspection techniques. For example, ESD sensitive components are typically protected by packaging that industry identifies by “color”, i.e. “pink” or “blue” for antistatic bubble and “black” for carbon-loaded polymer (JEDEC trays and tape & reel). This identification marker is widely accepted by aerospace & defense, yet color is no longer an indicator of static control packaging performance. A simple and cost-effective electrical resistance test can be used to determine whether packaging is compliant.

A counterfeiter is not motivated to package fraudulent ESD sensitive components in compliant ESD safe packaging, as material costs can be 40% or more higher. Whether the protective packaging is non-compliant or suspect counterfeit, the EEE device could be compromised.

Even though some Federal agencies may not use dip tubes in manufacturing, many primes, CMs and electronic distributors continue to source EEE parts housed in antistatic IC carriers that are not designed for long term storage.

Figure 5: IC carrier or dip tubes that were quarantined by the user

Since 1997, our lab has evaluated static control products and packaging for major federal agencies, commercial end users, OEMs, CMs and distributors. For the past several years, many ESD materials and packaging from the Pacific Rim have failed standardized ANSI/ESD testing. For example, the reader can see in Figure 6 that an “ESD labeled” reel is insulative at 1.5 x 10^12 ohms; the limit is <1.0 x 10^11 ohms. In addition, the reel charged to -15,080 volts, which could be a cause of a Field Induced Model (FIM) discharge.
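To make that pass/fail arithmetic concrete, the short sketch below classifies a two-point resistance reading against the <1.0 x 10^11 ohm limit quoted above, plus a commonly used 1.0 x 10^4 ohm conductive/dissipative breakpoint (our addition, for illustration only; the function name is not part of any standard test tool).

def classify_resistance(ohms):
    # Rough classification of a packaging resistance reading against the
    # breakpoints described in the lead-in (illustrative only).
    if ohms < 1.0e4:
        return "conductive"
    if ohms < 1.0e11:
        return "dissipative (within the <1.0e11 ohm limit)"
    return "insulative - fails the <1.0e11 ohm limit"

print(classify_resistance(1.5e12))   # the reel of Figure 6 -> insulative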

Figure 6: Two-point resistance test of the reel (left) and non-contact voltage reading (right)

 

As stated earlier, the RMV & U.S. Army Defense Ammunition Center (DAC) white paper for the NIPHLE Conference in Washington, D.C. produced the following results:

  1. Fast Packs (Failed)
  2. Antistatic Bubble (Passed)
  3. Antistat Pink Poly Film (Failed)
  4. Type I Aluminum ESD Moisture Barrier Bag (Failed)
  5. Type III Metallized ESD Shielding Bag (Failed)

In short, initial qualification of a package or material must be followed by “periodic verification through physical testing.” Mission-critical EEE parts and components that require ESD packaging should therefore be re-validated on a periodic basis.

 

Bob Vermillion, CPP/Fellow, is a Certified ESD & Product Safety Engineer-iNARTE with subject matter expertise in the mitigation of Triboelectrification for a Mars surface and in troubleshooting robotics, systems and materials for the aerospace & defense, hand held devices, wearables, medical device, pharmaceutical, automotive and semiconductor sectors. Bob was recently elected to the Advisory Board Council of the Independent Distributors of Electronics Association, the governing body for IDEA-STD-1010B-2011.  A long standing member of the ESDA Standards Committee, Co-author of several ANSI level ESD documents, Co-Chair of the ESDA WG 19 Committee for Aerospace & Defense and Co-Chair of the SAE G-19 Packaging Sub-Committee for EEE Counterfeit Parts, Vermillion formerly served on the BoD with iNARTE.  Speaking engagements include Suspect Counterfeit Presentations/Seminars for NASA, DOE, Aerospace & Defense, California Polytechnic University, Loyola University and NASA Ames Conference on 3 May 2016 followed by his NIPHLE Training Conference presentations on 4 and 5 May 2016 . Vermillion is CEO and Chief Technology Officer of RMV Technology Group, LLC, a NASA Industry Partner and 3rd Party ESD Materials Testing, Training and Consulting Company. Bob can be reached at 650-964-4792 or bob@esdrmv.com.

 

InCompliance

By Bob Vermillion, CPP/Fellow

September 2014

 

The Silent Killer: Suspect/Counterfeit Items and Packaging

Over the past several years, U.S. based organizations have curtailed traditional internal verification efforts due to reliance on contract manufacturers, distributors and suppliers to do the right thing. The inspection of ESD sensitive parts is very important, but without special safeguards, the additional handling to remove and repack a product for validation can cause both […]

 

By Bob Vermillion, CPP/Fellow

The Dip Tube

Interference Technology

By Bob Vermillion, CPP/Fellow

June 1, 2010

This article illustrates that removal of ESD sensitive components from non-conforming or suspect dip tubes will generate ESD events.

Source (Page 72) http://www.interferencetechnology.com/the-dip-tube/  

 

JEDEC and Tape & Reel Issues      

Interference Technology UK

by Bob Vermillion, CPP/Fellow

November 2010  

Handling today’s architectures in combination with ultra sensitive electronic components packaged in suspect counterfeit or non-conforming materials leads to issues during the inspection process and in use. Issues in the handling of ultra sensitive (Class 0) ESD devices are discussed in this groundbreaking article.

Source URL:  http://www.interferencetechnology.com/jedec-and-tape-reel-issues/

 

Article Abbreviations or Acronyms:

 

ANSI – American National Standards Institute

CM – Contract Manufacturer

Dip Tube – IC Carrier

DoD – Department of Defense

ESD – Electrostatic Discharge

EEE parts – Electrical, Electronic (ESD Sensitive Devices) and Electromechanical

EE – Electrical Engineer

Fast Packs – Outer-sleeve, weather-resistant, stitched fiberboard container with convoluted foam pad

IC – Integrated Circuit

JEDEC – JEDEC Solid State Technology Association, formerly known as the Joint Electron Device Engineering Council

JEDEC Tray – Waffle Tray or IC Matrix Carrier to transport, store and stage ESD Sensitive Devices

[All JEDEC matrix trays are 12.7 x 5.35 inches (322.6 x 136mm)]

KSC – Kennedy Space Center (NASA)

NASA QLF – NASA Quality Leadership Forum

NIPHLE – National Institute of Packaging & Handling Logistics Engineers

OEM – Original Equipment Manufacturer

Type 1 – Aluminum ESD Safe Moisture Barrier Bag (MBB), see Mil-PRF-81705E

Tape and Reel – A format for packaging, transporting, storing, and placing components and devices. The desired components and devices, such as capacitors or chips, are securely adhered to a tape which is wound upon a reel, providing a simple and protective manner of packaging, transporting, and storing. The reels can then be utilized with special equipment which provides for automatic insertion or placement of the parts so held. Its abbreviation is tape & reel packaging[2].

 

[1] http://www.nytimes.com/2016/02/12/business/international/sweeping-trade-enforcement-law-gets-final-senate-approval.html?_r=0

[2] http://www.dictionaryofengineering.com/definition/tape-and-reel-packaging.html


Why Is There AIR (in MIL-STD-461G)?


Ken Javor, EMC Compliance January 2016

(with a tip of the hat to a great performer…)

As noted in the compleat MIL-STD-461G review also found in this issue of ITEM, SAE Aerospace Information Report (AIR) 6236, In-House Verification of EMI Test Equipment, was written specifically to support MIL-STD-461G. In that standard, section 4.3.11 Calibration of measuring equipment has been reduced in scope to devices such as EMI receivers and spectrum analyzers, oscilloscopes and (RS103) electric field sensors. Section 4.3.11 now says, “After the initial calibration, passive devices such as measurement antennas, current probes, and LISNs, require no further formal calibration unless the device is repaired. The measurement system integrity check in the procedures is sufficient to determine acceptability of passive devices.”  AIR 6236 was written to support verification of the proper operation of such devices in the EMI test facility using only test equipment commonly available there. The idea behind the AIR is that if a measurement system integrity check proves problematic, the AIR 6236 measurements will demonstrate whether or not there is a problem with a transducer. AIR 6236 was published in December 2015. The procedures in the AIR can also be used in-house to routinely self-check EMI test equipment, if desired.

This synopsis, by the AIR’s author, discusses what’s in it, and why, and includes a test procedure for one piece of equipment that was left out of the AIR.

The Introduction says that the AIR provides guidance on how to self-check the devices listed below, using equipment commonly found in EMI test facilities.  The purpose is not to calibrate these devices, but to check that they have not varied significantly from manufacturer’s specifications.

The Scope says that the AIR provides guidance to the EMI test facility on how to check performance of the following types of EMI test equipment:

Current probe

Line Impedance Stabilization Network (LISN)

Directional coupler

Attenuator

Cable loss

Low noise preamplifier

Rod antenna base

Passive antennas

Power-line ripple detector (CS101 transducer)*

*The last device is not described in the AIR, but should have been, an oversight on the author’s part. The power-line ripple detector is new in the MIL-STD-461G CS101 section.  The PRD allows the use of a spectrum analyzer or EMI receiver to monitor injected CS101 ripple, in lieu of an oscilloscope, which is very helpful when injecting ripple on an ac bus.

All the AIR 6236 performance checks can be performed without software.  A computer may be required to generate an electronic or hard copy of data.  This is not to say that custom software might not be helpful; just that the procedures as written intentionally eschew the necessity of automated operation.

The Purpose of AIR 6236 is not to reproduce the procedures used by an accredited calibration facility, but rather to provide simple and accurate methods available using only test equipment found in an EMI test facility.  For simplicity, all set-ups are shown using a network analyzer, but a spectrum analyzer or EMI receiver with built-in tracking generator may be used in lieu of a network analyzer, and if that isn’t available, a separate signal generator may replace the tracking generator.  The effects of these substitutions are discussed in the final section.

AIR 6236 measurement methods are not exclusive, but found to work well with a minimum of complexity.  This is why it is an AIR – aerospace information report – rather than an ARP – aerospace recommended practice.  There are many ways to skin the cats included in the AIR, and others may be judged better than those included, depending on the value system of the person holding judgment.  The standard of value in selecting the included measurements was that they could be performed by an EMI test facility with equipment they already own and which would have NIST-traceable calibrations.

MIL-STD-461 is listed as an “Applicable Document.”

The following Performance Checks form the main body of AIR 6236.

  1. Current Probe

Various models of current probes based on transformer action are used from frequencies as low as 1 Hz to at least 1 GHz.  All these probes may be calibrated as per Figure 1.

In Figure 1, the network analyzer source drives current through the calibration fixture, which the current probe senses. The attenuator values (excepting the 10 dB pad on the input side of the calibration fixture) are chosen so that the ratio of the current probe output (T-port) to the reference (R) input is directly the transfer impedance in dB Ohms, with no data reduction required. They also perform impedance-matching functions, reducing vswr-related errors at higher frequencies. The 10 dB pad is solely for impedance matching and vswr reduction, and may be omitted when it is not needed, typically at audio frequencies where extra signal level into the calibration fixture is required. Its value does not affect the transfer impedance calculation.
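In equation form (our notation, consistent with the dB-Ohm statement above), the transfer impedance recovered from this measurement is:

% Current probe transfer impedance (notation ours): V_T is the probe output
% voltage and I_fix is the current driven through the calibration fixture.
Z_T = \frac{V_T}{I_{\mathrm{fix}}}, \qquad
Z_T\,[\mathrm{dB\Omega}] = V_T\,[\mathrm{dB\mu V}] - I_{\mathrm{fix}}\,[\mathrm{dB\mu A}]

With the attenuator values chosen as described, the network analyzer’s T/R ratio in dB reads this quantity directly.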

Figure 1: Current Probe Calibration – T/R ratio is the transfer impedance in dB Ohms.

At radio frequencies where there is plenty of dynamic range, the source setting should be set 10 dB below maximum in order to place 10 dB of impedance matching attenuation between the source and coaxial transmission line. Also at radio frequencies where loss in the coaxial cable becomes appreciable, the length and type of coaxial connection between current probe output and “T” port and between the 20 dB pads on the output of the calibration fixture and the “R” input must be the same.

  2. LISN

While there are several methods for measuring the LISN impedance specified in MIL-STD-461, none has the simplicity and ease of measuring the insertion loss the LISN presents to a 50 ohm signal source.  Insertion loss is the potential measured at the LISN port relative to that at a 50 ohm load.  Above 1 MHz, where the 50 uH LISN approximates 50 Ohms, the insertion loss is 0 dB.  At lower frequencies, insertion loss increases with decreasing frequency.  Figure 2a shows the measurement set-up, and Figure 2b shows expected results, including error bars representing the MIL-STD-461 20% tolerance on LISN impedance.  This method and limit account for the 0.25 uF blocking capacitor loss.  Note that the upper tolerance above 1 MHz is strictly academic; there is no way the LISN impedance can be higher than 50 Ohms, so the insertion loss cannot exceed 0 dB.  At frequencies where coaxial cable loss is significant, the type and length of the cables connecting to the “T” and “R” ports must be the same.  The connection between splitter and LISN input power connector must be short enough to have no significant loss. Insertion loss is measured as the T/R ratio.
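To illustrate why the insertion loss behaves this way, the sketch below evaluates a deliberately simplified 50 uH LISN model – the 50 uH inductor in parallel with the 0.25 uF blocking capacitor feeding a 50 Ohm measurement port, with the source-side 5 Ohm/8 uF branch ignored – driven from a 50 Ohm source. This simplification is ours, for illustration; it is not the AIR 6236 procedure.

import math

L = 50e-6     # LISN inductance, H
C = 0.25e-6   # blocking capacitor, F
R0 = 50.0     # source / measurement impedance, ohms

def lisn_insertion_loss_db(freq_hz):
    # Impedance seen at the LISN power port: 50 uH inductor in parallel with
    # the (blocking capacitor + 50 ohm measurement port) branch.
    w = 2 * math.pi * freq_hz
    z_l = 1j * w * L
    z_meas = R0 + 1 / (1j * w * C)
    z = z_l * z_meas / (z_l + z_meas)
    # Level at the LISN port relative to the level across a 50 ohm load,
    # both driven from the same 50 ohm source:
    # V_lisn / V_50 = [Z/(Z+50)] / [50/(50+50)] = 2Z/(Z+50)
    return 20 * math.log10(abs(2 * z / (z + R0)))

for f in (10e3, 100e3, 1e6, 10e6):
    print(f"{f/1e3:8.0f} kHz  {lisn_insertion_loss_db(f):6.1f} dB")

The model reproduces the qualitative behaviour described above: roughly 0 dB above 1 MHz and an increasingly negative insertion loss as the frequency drops.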

Figure 2a: LISN insertion loss measurement set-up

Figure 2b: MIL-STD-461 50 uH (upper curve) and 5 uH (lower curve) LISN insertion loss, including losses in the 0.25 uF blocking capacitor with the 50 uH curve

  3. Directional Coupler

The forward power port coupling factor is used in some MIL-STD-461 measurements.  This procedure measures that factor, as shown in Figure 3a.  The T/R ratio is the coupling port factor.  At frequencies where coaxial cable loss is significant, the type and length of the cables connecting to the “T” and “R” ports must be the same.

Figure 3a: Directional coupler forward power coupling factor measurement

Because return loss can be used to verify antenna performance (see section 8), the following set-up and description explain how to characterize the reverse power port.  Figure 3b is similar to Figure 3a and measures the reverse power port coupling factor.  The T/R ratio is the reverse power coupling port factor.  At frequencies where coaxial cable loss is significant, the type and length of the cables connecting to the “T” and “R” ports must be the same.  Connection between splitter and directional coupler should be as short as possible, with negligible loss.

Figure 3c shows how to determine the limit on return loss measurement associated with a good match to 50 Ohms.  The return loss so measured represents a minimum vswr value that can be ascertained using this method.

Figure 3b: Directional coupler reverse power coupling factor measurement

Figure 3c: Measurement to determine the minimum vswr that can be determined using the return loss method.

  4. Resistive Attenuator

Attenuators are used in a variety of tests, both emissions and susceptibility.  This procedure measures attenuation, as shown in Figure 4.  The T/R ratio represents the attenuation. At frequencies where coaxial cable loss is significant, the type and length of the cables connecting to the “T” and “R” ports must be the same.  Connection between attenuator and splitter should be as short as possible, with negligible loss.

Figure 4: Attenuator measurement

 

  5. Cable Loss

Coaxial cables are used in all measurement set-ups.  This procedure measures cable attenuation, as shown in Figures 5a/b.  The T/R ratio represents the attenuation. The type and length of the cables connecting to the “T” and “R” ports must be the same, and for this measurement they must be measured to be the same, as in Figure 5a.  Once these cables have been shown to be the same, or their differences accounted for, they may be used to measure the loss of the cable-under-test, as in Figure 5b.  Because small losses are measured, vswr can be a perturbing factor.  Attenuation placed between the test and reference cable can minimize any impedance discontinuity effects.

Figure 5a: Reference cable loss measurement

 

Figure 5b: Cable loss measurement

  6. Low Noise Preamplifier Gain

Low noise pre-amplifiers are often employed to make sensitive measurements such as radiated emissions, where the noise figure performance of the spectrum analyzer or EMI receiver is in itself not good enough to measure to the required limit.  This procedure measures the amplifier gain, which must be accounted for when reducing data measured using the preamplifier. Figure 6 shows the set-up. The T/R ratio represents the gain.  Care must be taken to use a very low input so the amplified output is well below the 1 dB compression point of the preamplifier.  This method can also be used to ascertain the 1 dB compression point, by repeatedly measuring the gain while increasing the input, until gain compression is realized.  At frequencies where coaxial cable loss is significant, the type and length of the cables connecting to the “T” and “R” ports must be the same. The connection between the splitter and preamplifier should be as short as possible with negligible loss.

Figure 6: Low noise preamplifier gain measurement

  7. 41” Rod Antenna Base Transducer Factor Measurement

The base of a 41” rod antenna, whether active or passive, acts as an impedance matching device between the capacitive output impedance of the rod, and the 50 Ohm connection into the spectrum analyzer or EMI receiver.  A capacitor simulating the rod output impedance must be used in series between the network analyzer 50 Ohm source output, and the point at which the rod antenna mates with the antenna base, as per MIL-STD-461F/G Figure RE102-8, and as depicted below in Figure 7.  The rod antenna factor is the measured transducer factor (gain or loss) less 6 dB, to account for the half-meter effective height of the 41” rod. The ratio T/R represents the gain or loss of the rod antenna base. Care must be taken in case of an active rod antenna to select a sufficiently low source signal level in order to avoid overload of the preamplifier in the rod antenna base.

Figure 7: 41” rod antenna base transducer factor measurement

  8. VSWR Check of Antenna Matching Network

The most accurate check of an antenna’s performance is its physical dimensions.  If the radiating elements have not suffered damage, and the matching network between the 50 Ohm coaxial input and the radiating elements is also intact, antenna performance will be as advertised.  While the radiating elements may be inspected visually, the matching network cannot, and its performance must be measured to ascertain integrity.  While a simple device such as the small loop used for MIL-STD-461 test RE101 may be measured with an ohmmeter to verify continuity, more complex antennas such as the biconical and double ridge guide horns cannot be so checked.  A check of their match to 50 Ohms within their operating frequency band can verify that the matching network is not damaged. Such a check also reveals any damage to coaxial connectors.

There are many ways to measure vswr, directly and indirectly.  The vswr measurement shown in Figure 8 was specifically chosen to use only equipment found in an EMI test facility.

Figure 8: Antenna vswr measurement

Return loss is related to vswr as shown.

Return loss (dB) = -20 log [ (vswr-1)/(vswr + 1)]

Low vswr means a good match: return loss is high, and the measured T/R ratio will be low.  Conversely, a poor match results in high reverse power, and the T/R ratio will be higher.  In general, antennas have poor vswr characteristics near band edges, and best performance mid-band.  In particular, the 137 cm tip-to-tip biconical antenna below 80 MHz has such poor vswr characteristics (so much reflected power) as to be nearly indistinguishable from a bad balun. Therefore, vswr should be measured mid-band, and compared to manufacturer’s specifications there.  Table 8 gives a range of vswr vs. return loss values useful in characterizing antenna matching networks.

VSWR       Return loss, dB (as measured T/R ratio)
1:1        -∞
1.22:1     -20
1.5:1      -14
2:1        -9.5
2.5:1      -7.4
3:1        -6
3.5:1      -5.1

Table 8: Vswr vs. return loss

Note that return loss values in excess of -20 dB (measured T/R ratios below -20 dB) will be difficult to measure, and in general aren’t necessary, since they correspond to matched impedances very close to 50 Ohms, a condition not normally encountered with broadband antennas, where a vswr of 2:1 to 3:1 is typical.
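The conversion behind Table 8 is easy to script. The sketch below uses the reflection-coefficient relation given above and reports values with the sign convention of the table, i.e. as the (negative) T/R ratio one would read:

import math

def return_loss_db(vswr):
    # Return loss with the Table 8 sign convention (the measured T/R ratio, dB).
    if vswr <= 1.0:
        return float("-inf")                  # perfect match
    gamma = (vswr - 1.0) / (vswr + 1.0)       # reflection coefficient magnitude
    return 20 * math.log10(gamma)

def vswr_from_return_loss_db(rl_db):
    # Inverse conversion from a (negative) dB reading back to vswr.
    gamma = 10 ** (rl_db / 20.0)
    return (1 + gamma) / (1 - gamma)

for v in (1.22, 1.5, 2.0, 2.5, 3.0, 3.5):
    print(f"vswr {v}:1  ->  {return_loss_db(v):5.1f} dB")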

  9. Power-line Ripple Detector – not part of AIR 6236

The power-line ripple detector (PRD) acts as a resistive voltage divider and a transformer in order to allow a 50 ohm tunable voltmeter (spectrum analyzer or EMI receiver) to monitor audio frequency ripple superimposed on an ac or dc bus via the CS101 test method.  The transducer factor is the constant of proportionality between the ripple potential on the bus and what is measured at the 50 Ohm tunable device.  This test method uses a bnc-tee instead of a 50 ohm splitter because it is not a 50 ohm measurement, it is audio frequency, and it is critical that the reference reading be exactly what is applied to the PRD bus connection jacks.  The measurement is swept from 30 Hz to 150 kHz.  The PRD has two transducer factors; one is flat and represents voltage division and the transformer winding ratio, and the other rolls off above 5 kHz at the same rate but opposite slope to the MIL-STD-461 CS101 limit so that the 50 ohm tunable device measures a constant value even when the limit is decreasing with increasing frequency. This aids in making manual measurements, and also facilitates better signal-to-noise ratio as the limit gets lower at higher frequencies.
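Expressed as a formula (our notation; the numerical value of the factor depends on the particular PRD), the flat transducer factor is simply the ratio of the bus ripple to the level seen by the 50 Ohm tunable device:

% Power-line ripple detector transducer factor (notation ours): V_bus is the
% ripple potential on the bus, V_50 is the level at the 50 ohm tunable device.
TF\,[\mathrm{dB}] = 20 \log_{10}\!\left(\frac{V_{\mathrm{bus}}}{V_{50\,\Omega}}\right),
\qquad V_{\mathrm{bus}} = V_{50\,\Omega}\cdot 10^{TF/20}

The second, rolled-off factor adds a frequency-dependent term above 5 kHz that mirrors the CS101 limit slope, as described above.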

Figure 9: Power-line ripple detector transducer factor measurement

 

The last section is Measurement Options When a Network Analyzer Is Not Available

In lieu of a network analyzer, which is not ordinary EMI test equipment, a spectrum analyzer or EMI receiver with a built-in tracking generator may be used.  If that isn’t available, a spectrum analyzer/EMI receiver may be used along with a separate signal generator.

In each case, the rf input of the analyzer/receiver replaces the “T” (test) port on the network analyzer, while the tracking generator or signal generator replaces the “S” (source) port.  For those measurements involving 50 ohm devices, it is advantageous to use a 0 dBm signal source so that the lack of a reference measurement has no effect: the trace on the analyzer effectively is the “T/R” plot that would be obtained with a network analyzer.

An analyzer/receiver with the capability to display two traces may be used in the cases where the device-under-test loads the source and that must be taken into account.  A sub-1 GHz splitter such as is used with a network analyzer for this purpose may be obtained for petty cash. A microwave splitter is more expensive, but still relatively inexpensive as test equipment goes.

If a tracking generator is not available, and an external signal source is used, then two options are available. Absent any controlling software synchronizing the sweep (and thus effectively creating a tracking generator) the signal source and analyzer/receiver sweep are unsynchronized, which requires placing the analyzer/receiver in “max hold” display mode and performing multiple sweeps until the observed trace has no dropouts.  This requires more time than the other approaches, but requires no extra instrumentation, and no investment in computer control.

Some newer digital network analyzers are two port devices, requiring sequential measurements for reference and test measurements rather than traditional simultaneous measurements.  The measurement principle is the same.


MIL-STD-461G: The “Compleat” Review

Ken Javor, EMC Compliance January 2016

 

The deleted old,

The brand-spanking new.

That which was borrowed,

And that, eschewed.

MIL-STD-461G was released on 11 December 2015 and will become contractually obligatory on programs initiated after that date.

This account is more than a simple laundry list arrived at by performing a side-by-side “F” vs. “G” comparison.  Instead, it is an insider account into the issues with which the Tri-Service Working Group (TSWG) was grappling, and the thought processes behind the changes, as well as, of course, the changes themselves.  It also lists some of the issues brought to the table that were not incorporated in MIL-STD-461G, and why.

It will greatly assist the reader if a copy of MIL-STD-461G is available as this account unfolds.

As background, MIL-STD-461 is officially prepared by the US Air Force, but it is the product of a TSWG made up not surprisingly of representatives from the Army and Navy as well.  In addition to Service members there are industry representatives, of which the author is one.

Since 1993, MIL-STD-461 has been on a five-year review cycle, to ensure that it remains current and useful.  This does not mean a new revision has to be released every five years; just that a review must be performed on that cycle.  It would be entirely acceptable to simply reaffirm the old version with no changes.  To date, that hasn’t happened.

MIL-STD-461D and MIL-STD-462D released in 1993 remain the major “revolution” in military EMI standards, with evolutionary changes following. MIL-STD-461E combined MIL-STD-461 and MIL-STD-462 into a single standard, obsoleting MIL-STD-462 in 1999.  MIL-STD-461G makes the most structural changes since that time, adding two new requirements (lightning indirect effects, CS117, and personnel electrostatic discharge, CS118) while eliminating the CS106 requirement that was added the last time around in MIL-STD-461F.  So we have a net increase of one requirement.  There are also many other important changes, detailed herein.

One of the revolutionary aspects of MIL-STD-462D in 1993 was the inclusion of measurement system integrity checks that were performed prior to each emission measurement to ensure proper operation of the measurement system.  To the author’s knowledge, these checks have remained unique to MIL-STD-461 ever since.

The philosophy behind these checks gains its greatest expression in MIL-STD-461G.  The TSWG considers a real-time check of each set-up just prior to the actual measurement to be the best way to ensure an accurate measurement. To that end, several checks have been beefed up, but most importantly the regular calibration of transducers used in EMI testing has been de-emphasized.  Section 4.3.11 Calibration of measuring equipment has been reduced in scope to devices such as EMI receivers and spectrum analyzers, oscilloscopes and (RS103) electric field sensors.  The new text says, “After the initial calibration, passive devices such as measurement antennas, current probes, and LISNs, require no further formal calibration unless the device is repaired. The measurement system integrity check in the procedures is sufficient to determine acceptability of passive devices.”  A new SAE Aerospace Information Report, AIR 6236 has been written to support the verification of proper operation of such devices in the EMI test facility using only test equipment commonly available in an EMI test facility.  The idea is that if a measurement system integrity check shows a problem, the AIR 6236 measurements will demonstrate whether or not there is a problem with a transducer.  AIR 6236 is incorporated by reference only, and in the non-contractual appendix, at that.  It is not part of any measurement system integrity check.  Also the term “measurement system integrity check” globally replaces the inaccurate formerly used words, “calibration.”

Another theme beginning with MIL-STD-461D through “G” is balancing what is technically correct vs. what it is possible to get the average test facility to do correctly.  An example of this is the fixed distance for power wiring between test sample and LISNs.  Since 1993, it has been a minimum of two meters, and a maximum of 2.5 meters, for all tests.  Prior to 1993, under MIL-STD-462 back to 1967, the power wire length was one meter for CE/CS testing, and two meters for RE/RS testing.  The idea was that for CE testing there would be better accuracy with less vswr-induced error with a shorter cable, but a longer cable was necessary for RE02 and RS03. But the sense of the TSWG was that too few people were doing that, so they compromised on one length for all tests under MIL-STD-462D and ever since.  That is why CE102 only covers up to 10 MHz, instead of the previous CE03 running to 50 MHz.

Along these lines, MIL-STD-461G section 4.3.8.2 formalizes a requirement to check bond impedance between test sample and ground plane prior to EMI testing, and prior to cable-connection.  It is disconcerting that this needs to be stated after a half-century of MIL-STD-461.  Section 4.3.6 requires LISNs to be bonded to the ground plane with a resistance no greater than 2.5 milliohms.  Section 4.3.7.2 says that the only antenna that can be in the shield room during a radiated test is the antenna in actual use.  Translation: the shielded, anechoic-lined chamber is a test chamber, not a broom closet.  It is distressing to see a chamber outfitted with expensive absorber, often exceeding MIL-STD-461 absorber treatment requirements, while at the same time every antenna used for RE102 and RS103 except the one in use is littered around the periphery of the chamber.

Similarly, sections 4.3.8.6.1 and 4.3.8.6.2 that describe cable layout in the test chamber now stipulate that the 5 cm above ground standoff is to be achieved using “non-conductive material such as wood or foam.”  And that the entire length of the cable, not just the two meters exposed to the antenna, be so-supported above the ground plane.  Someone somewhere was using spare rf absorber to support cables…

A theme that began with MIL-STD-461F continues in “G”, and that is responding to abuses of the standard by practitioners of EMC “law” as opposed to EMC engineering.  Another way of saying this is that “lawyers” are misinterpreting the letter of the standard while ignoring the obvious intent.  The use of shielded power cables where it wasn’t justified resulted in a complete prohibition on the use of shielded power cables for EMI testing in MIL-STD-461F.  This was described in an article on the MIL-STD-461F revision that appeared in the January 2008 issue of Conformity magazine:

Prohibition of Use of Shielded Power Leads

The wording in section 4.3.8.6 (“Construction and arrangement of EUT cables”) is a little more definitive than in -461E, stating that shielded power conductors may not be used unless the platform on which the equipment is to be installed shields the power bus from point-of-origin to the load. There have been problems with equipment manufacturers asking for and receiving shielded power leads from the point-of-distribution (typically a breaker box) to the load, but with the power bus from the breaker box back to the generator being unshielded.

Of course the fundamental rule is that test wiring simulate the intended installation. With a partially shielded power bus, the equipment manufacturer can claim that he gets a shielded feed on the platform while the integrator sees an unshielded main bus. MIL-STD-461E 4.3.8.6 wording was not conclusive on this subject: “Electrical cable assemblies shall simulate actual installation and usage. Shielded cables or shielded leads (including power leads and wire grounds) within cables shall be used only if they have been specified in installation requirements.” This problem is alleviated in MIL-STD-461F, which states in plain language precisely the above quotation, but then adds, “Input (primary) power leads, returns, and wire grounds shall not be shielded.”

Similarly, the alternative field intensity pre-calibration technique using an antenna above 1 GHz that existed from MIL-STD-462D through MIL-STD-461F has now been removed, requiring real time leveling using an electrically short broadband electric field sensor over the entire test frequency range.  The original alternative two-antenna technique was a grandfather clause from 1993 when many EMI test facilities lacked an electric field sensor covering 1 – 18 GHz, which were new and expensive at the time.   There was and is nothing wrong with this technique, but EMC lawyers were twisting the meaning of the standard to say they could precalibrate the field in the absence of the test sample at all frequencies.  The “cure” for this abuse was to remove the grandfather clause, after an informal survey of USA EMI test facilities revealed that 100% of those polled had the equipment necessary to perform real-time leveling over all frequencies from 10 kHz to 18 GHz.

Another response to EMC lawyer abuse is very subtle, and is found in section 5.17.1 RE102 applicability.  In the “F” version, this sentence is found:

“… The requirement does not apply at the transmitter fundamental frequencies and the necessary occupied bandwidth of the signal.”

 

Find the difference in the “G” version:

 

“… this requirement does not apply at the transmitter fundamental frequency and the necessary occupied bandwidth of the signal.”

 

The difference is in the use of the plural “frequencies” in “F,” and the singular “frequency” in “G.”  Believe it or not, EMC lawyers were interpreting the plural to mean the requirement didn’t apply at any frequency to which the radio could be tuned, as opposed to the intent, which is that it doesn’t apply at the frequency to which the radio is tuned during the test.

Yet another theme, this one unique to MIL-STD-461G, is an added emphasis on the testing of large, floor standing test samples whose height approaches the horizontal extent of the test set-up.  In previous versions (“D” through “F”) there was plenty of information on how to set up RE102/RS103 antenna positions for test set-ups with extended horizontal dimensions, but no corresponding information for vertically large enclosures, such as 19” racks.  The RE102 and RS103 sections of this version of the standard now require a sufficient number of antenna positions such that the entire area of the test set-up has been interrogated/illuminated.

A combination of these two themes leads to a conundrum.  A comment against the draft for industry review correctly pointed out that a high gain antenna of the type often used at microwave frequencies won’t be able to illuminate a large enclosure such as a 19” rack and an electric field sensor placed per standard guidelines, because the illumination spot size can’t cover both the enclosure and a properly placed sensor with sufficient clearance from the enclosure to avoid undue influence from it.  This sort of situation calls for a precalibrated field, but that is no longer available.  Such cases will require tailoring with buy-in from the customer.

There is a global clarification to requirements CS114, CS115, and CS116.  The requirement to monitor cable current within 5 cm of the equipment front face is relaxed if the EMI backshell (or braid sock) extends beyond that distance.  In that case, the monitor probe should be placed as close as possible to the backshell end.  The 5 cm requirement is somewhat of an anachronism ever since the “E” revision, which reduced the maximum CS114 frequency from 400 MHz to 200 MHz.  The concept behind the 5 cm rule was to monitor the current that was flowing into the test sample.  This needs to be done within a tenth wavelength of the test sample, which is 7.5 cm at 400 MHz, but 15 cm at 200 MHz.  Note the spectrum of CS115 and CS116 is lower than that of CS114, so that probe placement instructions based on CS114 suffice for these latter two requirements.
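The tenth-wavelength distances quoted above follow directly from the free-space wavelength:

% Tenth-wavelength monitor-probe distance (c = 3 x 10^8 m/s assumed)
d = \frac{\lambda}{10} = \frac{c}{10 f}, \qquad
f = 400\ \mathrm{MHz} \Rightarrow d = 7.5\ \mathrm{cm}, \qquad
f = 200\ \mathrm{MHz} \Rightarrow d = 15\ \mathrm{cm}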

Another global change to the measurement system integrity checks is to move specified test frequencies away from the very end of a requirement frequency range, and away from a bandwidth break point, in order that the data trace show the complete response, and not a truncated version thereof.

We’ll get something out of the way first even though it is out-of-order, because it is likely the most pressing concern for EMI test facilities.  The two new requirements CS117 and CS118 require no test equipment different from RTCA/DO-160 sections 22 and 25, with one exception.  CS118 requires a contact discharge “target” as per EN 61000-4-2.  If a test house has these test capabilities at present, they need buy no new test equipment.  A summary table of equipment new to MIL-STD-461G is presented at the article end.  It is presented at the end so that the reader can understand the context within which the new equipment is allowable and/or necessary.  This table is not an endorsement, just a cross-reference of requirements, equipment and vendors.

There was a DoD input to include not only indirect effects of lightning, but also direct effects, as well.  The TSWG rejected this on the basis that it doesn’t belong in MIL-STD-461.  Direct effects testing (RTCA/DO-160 section 23) doesn’t naturally map into MIL-STD-461, because the pass/fail criterion is usually not proper operation, but lack of damage, or containment of damage so it doesn’t propagate and cause an issue to other equipment/platform structure.  Thus it more naturally falls within the purview of MIL-STD-810.  It should be noted that RTCA/DO-160 “Environmental Conditions and Test Procedures for Aircraft” subsumes three different military standards: MIL-STD-810 for environmental qualification, MIL-STD-704 for electrical power quality, and MIL-STD-461 for EMI control.  Lightning indirect effects is close enough to MIL-STD-461 to be a comfortable fit there, but direct effects evaluation most assuredly is not.

An editorial change is that frequency ranges are no longer listed in the individual requirement titles, but rather moved to the applicability subsection, where they more naturally belong.  Many requirements have different start and stop frequencies depending on Service and application.

What follows is a list of what the author considers major changes of interest to the industry.

Section 1.2.2 tailoring of requirements now explicitly states that any tailoring must be approved by the procuring activity.  This was always the case, but wasn’t explicitly stated.

Most of the section 3 definitions have been tweaked.  In particular, the definition of “Below deck” (section 3.4 in “F”) has been expanded into two subsections in “G”:  3.1.3 Below deck, and 3.1.5 Exposed below deck.  Exposed below deck simply means not as much shielding as assumed for below deck, and equipment to be installed below deck gets the same RE102 limit as topside in Figure RE102-1, where the more stringent limit instead of being labeled “topside” as in “F,” is now labeled “above deck and exposed below deck.”

Supporting appendix material for section 4.2.2 Filtering (Navy only) adds extra rationale for the limits on line-to-ground capacitance.  It all makes sense, but it doesn’t have the urgency of the original explanation made to the author many moons (decades) ago.  The original explanation stated that ship power was ungrounded so that in the event of battle damage, one phase could short to structure and continue to operate without degradation.  Therefore it was necessary to limit line-to-ground capacitance to preserve a high impedance between phases in the event of such a short circuit.  To the author, that is a much more satisfying (read strong) argument in the event someone wishes to violate it than more nebulous concepts (to program management) such as hull currents, ground loops and leakage current.

Section 4.3.5.1 (metallic ground plane), augmented by brand new Figure 5 requires 2.5 meters in any direction from the edge of the test set-up boundary to the edge of the ground plane, as compared to 1.5 meters in earlier versions of the standard.  The change was based on the desire to have the ground plane underneath the entire set-up, antennas used in various tests, and distance beyond the backside of any such antenna still covered with ground plane.  Also note Figure 5 replaces what looked like a truck or other wheeled vehicle (but wasn’t supposed to) with something that looks like a test equipment rack.  It is important to always reinforce that MIL-STD-461 applies to equipments and subsystems, not vehicles/platforms.

Figures 2 – 5 have two subtle changes.  The first is that the test sample enclosures are oriented so that the connector side faces the way the cables are laid down the length of the tabletop, as opposed to in previous versions, where the connector side faces the front of the table.  Actually Figure 5 has side-facing connectors in both “F” and “G;” the difference in Figure 5 is that the test sample evokes an electronic equipment rack instead of a wheeled vehicle (which was never intended), and the cables are laid out 5 cm above a tabletop ground plane, not 5 cm above the floor, as in “F.”  The second change is that all these figures are now titled “general.”  Complex enclosures with lots of cables and/or long EMI backshells with large cable bend radii will follow the new setup, but paragraph 4.3.8.5 Orientation of EUTs is unchanged and still requires surfaces which produce maximum radiation to face the measurement antenna.  So nothing to fear here, EMC lawyers: there is still plenty of opportunity to ply your craft.

A theme in MIL-STD-461G is to expand instructions on how to set up and test when the test sample has large vertical extent.  Previously, the instructions were based on avionics-type equipment enclosures that mount on the tabletop ground plane. These could be large in horizontal extent, and instructions have previously existed on how to lay this out and how to place antennas.  Sections 4.3.8.6.1 (interconnecting leads and cables) and 4.3.8.6.2 (input primary power leads) expand on the routing of cables when the test sample is a large floor-standing unit.  Figures 4 and 5 also augment this topic.

Issues arise with proper antenna coverage of test samples with large vertical extent, and these are dealt with in RE102 and RS103 by requiring the entire surface area to be illuminated, not just the horizontal width.  But another issue is cable length.  There has always been a limit of 2.5 meters maximum between test sample and LISNs, in order to allow the LISN to control the line impedance (the reason why CE102 stops at 10 MHz).  But with a large test sample like a floor-standing rack, especially if the cables exit near the top and a power strip runs down the height of the rack powering loads near the bottom, the 2.5 meters gets used up very quickly, and strict adherence to that limit would mount the LISNs very near the rack itself, limiting RE/RS interaction with power lines.  Given the MIL-STD-462D decision to have a single power wire length for all tests, as opposed to short cables for CE testing and long cables for RE/RS as previously, it was decided to require two meters of power wiring exposed 5 cm above the tabletop ground plane regardless of where the wires emanate from the test sample or how long the cables are within it.

Another theme in MIL-STD-461G is to expressly permit the use of certain types of test equipment that have appeared since the release of MIL-STD-461F.  Perhaps the most important of these is the “time-domain” or Fast Fourier Transform (FFT) EMI receiver.  Such receivers differ from the traditional in that instead of tuning to a particular frequency using the prescribed bandwidth and then stepping to the next frequency using a not-to-exceed half-bandwidth step, these receivers look at megahertz or tens of megahertz bands, and use FFT algorithms to recover the signals that would be measured using Table II prescribed bandwidths.  Such receivers are much faster than traditional receivers.  Section 4.3.10 (use of measurement equipment) expressly mentions and condones use of such receivers, and Table II is augmented to show dwell times required for time domain receivers.  The appendix for this section and Table II explains why the FFT-specific dwell times are necessary, and shows test data for a broadband signal with much better performance than obtainable with a traditional receiver or spectrum analyzer when Table II dwell times are used.  The appendix (pages 197 – 200) also shows what happens if Table II FFT-specific dwell times are not used, with the broadband signal completely missed.  The FFT receiver properly or improperly used is like the little girl in the nursery rhyme:

“There was a little girl,
Who had a little curl,
Right in the middle of her forehead.
When she was good,
She was very, very good,
But when she was bad she was horrid.”

The Table II modifications pertaining to FFT receivers are designed to make sure the little girl is always very, very good, and when she is bad, she is no worse than little girls used to be.

There are much greater advantages inherent in such receivers than simply getting a test done faster.  Some devices (a linear actuator, for example) come to the end of their travel much faster than a traditional CE102 or RE102 sweep.  Or a helicopter rescue hoist cannot deploy as much line in a shield room as in flight, and thus cannot operate continuously through an emissions sweep.  The ability to capture multiple megahertz bands during a few seconds of operation can actually provide better quality data for such devices.  There are also devices designed with limited lifetimes, for which the ability to sweep faster may make testing possible that would have been impossible otherwise.

Section 4.3.10.4.2 (modulation of susceptibility signals) doesn’t say so, but now both CS114 and RS103 require demonstration that the required modulation has been applied.  This is most easily done in zero-span mode, measuring the correct on-off timing and also the 40 dB on-to-off ratio.

Section 4.3.10.4.3 (thresholds of susceptibility) now requires “zeroing in” on the frequency of greatest susceptibility within the susceptibility band.

As mentioned earlier, Section 4.3.11 (calibration of measurement equipment) removes the need for routine calibration cycles on passive transducers.

Section 5.4.1 CE101 applicability adds a note explaining when the requirement is applicable to equipment installed on Navy aircraft.

Section 5.5.3.4.a.2 is the expansion on the basic CE102 measurement system integrity check that verifies the LISN impedance at 10.5 and 100 kHz.  The previous (“D” through “F”) technique verified the impedance at 2 and 10 MHz, but not at the lower frequencies, and with elimination of a requirement to regularly calibrate LISNs, the expanded measurement system integrity check fills that gap.  There is little extra effort besides record keeping.  Because the LISN is a low impedance relative to 50 Ohms, it is already the case that the signal source output amplitude must be increased above the actual level resulting across the LISN.  The extra effort is simply to document the required increase (in dB) and compare that to what is theoretically required per the LISN impedance curve of Figure 7, including both the 20% tolerance of that figure, plus the losses associated with the LISN 0.25 uF blocking capacitor.  This section says what the decibel difference is supposed to be at the measurement system integrity test frequencies of 10.5 and 100 kHz. SAE AIR 6236 shows the LISN insertion loss curve with tolerances over the entire 10 kHz to 10 MHz frequency range, and how to measure it.
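One way to write down the comparison (a sketch under the assumption of an ideal 50 Ohm source whose amplitude setting corresponds to the level it would deliver into a matched 50 Ohm load, with |Z| the nominal LISN impedance magnitude, including blocking-capacitor effects, at the check frequency) is:

% Required increase of the source setting above the level appearing across the LISN
% (assumptions as stated in the lead-in above this block)
\Delta\,[\mathrm{dB}] = 20 \log_{10}\!\left(\frac{|Z + 50\,\Omega|}{2\,|Z|}\right)

The increase actually needed at 10.5 kHz and 100 kHz is then compared against this value computed from the Figure 7 impedance curve, with the 20% tolerance applied.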

Section 5.6.1 CE106 applicability has been modified by striking the following sentence from MIL-STD-461F:

“RE102 is applicable for emissions from antennas in receive and standby modes for equipment designed with antennas permanently mounted to the EUT.”

In the author’s opinion, this is a big loss, and not only for receive and standby modes, but also for low power transmitters such as Wi-Fi.  RE102 is much easier to perform than RE103, and where the device, whether transmitting or not, can be shown to be in compliance with RE102 rather than RE103, that meets the overall intent of controlling interference.  Also, the -80 dBc type requirement makes no sense for a milliwatt transmitter; RE102 is the only applicable requirement at harmonics of a low-power transmitter.

Section 5.6.1 CE106 has been modified for NAVSEA (surface ship) transmitter procurements. The traditional 5% exclusion zone surrounding the transmit frequency is increased according to a formula given in this section for transmitters operating above 1 kW (60 dBm).

There is also a modification of the criterion for the highest required test frequency.  The effect of the change is that the test must always be run to at least 10 GHz, with a maximum frequency of 40 GHz.  Under MIL-STD-461F, the upper frequency was stated to be:

“The end frequency of the test is 40 GHz or twenty times the highest generated or received frequency within the EUT, whichever is less.”

Under the “G” change, the end frequency criterion depends on whether the highest generated or received frequency is above or below 1 GHz.  If the highest generated or received frequency is below 1 GHz, the end frequency is twenty times that frequency or 18 GHz, whichever is greater. If the highest generated or received frequency is equal to or above 1 GHz, then the end frequency is ten times the highest frequency, or 40 GHz, whichever is less.

To illustrate how this can affect results, consider two devices, one with a highest generated or received frequency of 999 MHz, and the other with a 1 GHz highest frequency. Under MIL-STD-461F, the end frequencies are practically identical, at or near 20 GHz.  Under MIL-STD-461G, the first device has a test stop frequency of 18 GHz, whereas the second device test stop frequency is only 10 GHz.

Of course, the benefit of this approach is that a lot of devices will only need to be tested to 18 GHz instead of higher.  Every test facility can test to 18 GHz because of RE102, but testing beyond that often requires the rental of a special receiver, so overall this modification is beneficial.

Section 5.6.2 CE106 has been modified for NAVSEA (surface ship) transmitter procurements. The relative limit in decibels below the carrier (e.g., -80 dBc) has been changed to a fixed level of -40 dBm.  This was done to aid in co-location of high power transmitters and sensitive receivers.  Note that for any transmitter power level above 10 Watts (40 dBm) this represents a more stringent limit than previously.  There is a relaxation of this -40 dBm level to 0 dBm if the transmitter duty cycle is below 0.2%, which would take care of many radar systems.
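A quick way to see the effect of the fixed limit is to compare it with the old relative limit over a range of transmitter powers.  The sketch below (illustrative Python, based only on the description above) applies the -80 dBc relative limit versus the fixed -40 dBm limit, with the 0.2% duty-cycle relaxation to 0 dBm.

def old_relative_limit_dbm(carrier_dbm):
    # Previous style of limit: a fixed number of dB below the carrier.
    return carrier_dbm - 80.0

def new_navsea_limit_dbm(duty_cycle):
    # Fixed -40 dBm limit, relaxed to 0 dBm for duty cycles below 0.2 %.
    return 0.0 if duty_cycle < 0.002 else -40.0

for p_dbm in (40, 50, 60, 70):   # 10 W, 100 W, 1 kW, 10 kW
    print(f"{p_dbm} dBm carrier: old {old_relative_limit_dbm(p_dbm):6.1f} dBm, "
          f"new {new_navsea_limit_dbm(duty_cycle=0.01):6.1f} dBm")

For the 10 W case the two limits coincide; above that, the fixed -40 dBm level is the more stringent, which is the point made above.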

Section 5.7.1 CS101 limits applicability to equipment drawing less than 30 Amps per phase, even though test equipment exists supporting testing to 100 Amps per phase.  The rationale is that such high-current loads usually operate off high-potential buses, where the CS101 ripple levels are smaller than the distortion already present, the total CS101 ripple power is infinitesimal compared to the actual load power, and susceptibilities just aren’t observed.  However, it should be noted that CS101 limits are based on MIL-STD-704, which doesn’t address bus potentials above 115 Vac or 270 Vdc.  The large loads to which this 30 Amp limitation would usually apply operate from buses upwards of 400 Vac.  Note that the 6.3 Vrms ripple limit of Curve 1 is about 5% of a 115 Vac bus potential but only about 1.5% of a 440 Vac bus.  If the CS101 limit for a 440 Vac bus were raised to that same 5% (22 Vrms) then (in the author’s opinion) it would be much more likely that issues would arise.
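The arithmetic behind those percentages is simple enough to show directly (Python, illustrative only):

ripple_limit_vrms = 6.3       # CS101 Curve 1 plateau level
for bus_vrms in (115.0, 440.0):
    pct = 100 * ripple_limit_vrms / bus_vrms
    print(f"{bus_vrms:5.0f} V bus: 6.3 Vrms ripple is {pct:3.1f} % of bus potential")
print(f"A 5 % limit on a 440 V bus would be {0.05 * 440:4.1f} Vrms")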

Section 5.7.3 CS101 test procedure allows for the use of a power line ripple detector (PRD) to measure ripple induced on an ac power line, which is very difficult to monitor.  The PRD functions as an interface between the power line and the 50 Ohm input of a spectrum analyzer or EMI receiver, allowing the measurement to be made in the frequency domain so that the ripple component can be seen entirely separately from the power line frequency.  This was described in an article entitled “Fifty Year-Old EMI Testing Problems Solved,” in the June 2012 issue of IN Compliance magazine.  The electronic archive shows video of the ripple on the peak of the ac power waveform vs. the separate injected ripple component. Stills are shown below.

Figure 1: 800 Hz ripple riding on a 400 Hz ac power bus, traditional CS101 measurement.

Figure 2: 800 Hz ripple riding on a 400 Hz ac power bus, measured in the frequency domain. The PRD has a -66 dB transducer factor, so 66 dB has to be added to measured values to get to values on the power bus.

The PRD allows for monitoring and injecting ripple below the power frequency.  This was a requirement prior to 1993, but the capability was lost when MIL-STD-462D prohibited use of the phase-shift network method of eliminating the power frequency from the ripple measurement.  In MIL-STD-461D and onward, because of that prohibition, the limit for ac ripple started at the second harmonic of the power frequency instead of at 30 Hz.  The PRD facilitates monitoring down to 30 Hz on any type of bus, as shown in Figure 3, but the TSWG was not interested in reviving the 30 Hz start frequency for ac buses after more than twenty years without it.

Figure 3: Injection of 100 Hz ripple on a 400 Hz ac bus.

The PRD as commercialized by Pearson Electronics contains an isolation transformer so that connection of the ac neutral to the PRD maintains isolation between the neutral and the grounded EMI receiver or spectrum analyzer chassis.  That isolation is required by paragraph 5.7.3.1 of MIL-STD-461G.

CS101 figures are updated to show either the traditional measurement with floated oscilloscope or the new measurement with PRD and grounded receiver.

The CS101 supporting appendix material also includes this valuable information:

“Below 10 kHz there is a possibility that a portion of the injected signal will drop across the power source rather than the test sample power input.  Therefore, below 10 kHz when the specification limit potential cannot be developed across the test sample power input and the precalibrated power limit has been reached, it is incumbent on the tester to check that the missing signal level is not being dropped across the power source.  If the missing potential is there (usually due to high impedance test facility EMI filters), then steps should be taken to lower the source impedance.  This can be done on DC power by using a larger capacitor (~10,000 uF) in parallel with the 10 uF capacitor.  With AC power that isn’t possible and the best approach is to bypass facility EMI filters entirely, bringing unfiltered power into the room.”

The PRD facilitates that measurement by having two sets of jacks for simultaneously connecting to both the test sample power input and the power source, allowing either value to be read at the flip of a switch.

CS106 was added in MIL-STD-461F and is deleted in MIL-STD-461G.  The rationale for adding it was included in the MIL-STD-461F rationale appendix and is repeated here:

“The primary concern is to ensure that equipment performance is not degraded from voltage transients experienced on shipboard power systems coupling to interface wiring inside enclosures.

Electrical transients occur on all electrical distribution systems and can cause problems in circuitry which tend to be sensitive to voltage transients, such as latching circuits expecting a single trigger signal. On submarines and surface ships, these transients can be caused by switching of inductive loads, circuit breaker (or relay) bounce, and load feedback onto the power distribution system.

The 400 volt peak, 5 microsecond pulse defined in Figure CS106-1 is a suitable representation of the typical transient observed on Navy platforms. Measurements of transients on Navy platforms have shown the transient durations (widths) are predominantly in the 1 – 10 microsecond range. The large majority (> 90%) of the transients measured on both the 115 volt and 440 volt ac power distribution systems were between 50 and 500 volts peak.”

The underlying issue was not the response of the power supply to the transient, but crosstalk within an equipment between the transient on the power wiring and signals carried on wiring adjacent to the power wires without adequate protection.  The very purpose of the requirement was to force adequate segregation between power and signal circuitry.

However, CS115 was designed specifically to represent the coupling of transients on a power bus into cables run adjacent to it.  The very short 30 ns duration and even shorter 2 ns rise and fall times represent the leading edge of a waveform such as CS106 on a power bus inductively coupling into an adjacent cable.  Measurements on a one foot section of ribbon cable modeling an unprotected connection between a connector and motherboard revealed that injecting CS115 on the simulated signal wires looked very similar to the cross-coupling from injecting CS106 on the simulated power wires.

It was concluded that CS115 already meets the intent behind the reintroduction of CS106.

There are two changes to CS114. One affects the limit, the other is procedural.

The limit reverts back to that of MIL-STD-461D, where the primary limit is the forward power recorded in the calibration fixture when the appropriate specification limit (Curve 1 – 5) is induced in the fixture, with the only current limit being 6 dB higher than the current in the plateau region of the curve.  This is as opposed to the “E” and “F” versions, where the current limit is the actual current at the specific test frequency.  The reason behind the reversion to MIL-STD-461D is explained in “(More) On Field-To-Wire Coupling Versus Conducted Injection Techniques,” in the October 2014 issue of IN Compliance magazine.  This change will make it important to tailor the breakpoint frequency of the limit (nominally 1 MHz) for platform or actual cable dimension, in order to avoid over-testing.  In order to perform that tailoring, it is necessary to understand that the breakpoint represents the frequency at which a platform or cable is one-half wavelength long.  A 1 MHz break point is a physical length of 150 meters.  So if a platform is instead about 15 meters long, the breakpoint would shift to 10 MHz.
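As a sketch of that tailoring rule of thumb (illustrative Python; the 150 m and 15 m figures are the ones quoted above):

C = 3e8  # free-space propagation velocity, m/s

def cs114_breakpoint_hz(platform_length_m):
    # Frequency at which the platform or cable run is one-half wavelength long.
    return C / (2 * platform_length_m)

print(f"{cs114_breakpoint_hz(150) / 1e6:.0f} MHz for a 150 m platform (nominal limit)")
print(f"{cs114_breakpoint_hz(15) / 1e6:.0f} MHz for a 15 m platform")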

The procedural change is that in addition to the traditional measurement of the forward power required to induce the specification limit current in the calibration fixture, the current in the fixture must be measured using the current probe that will be used to monitor current on the cable-under-test.  This is an augmentation of the measurement system integrity check, because again a current probe will not require periodic calibration.

CS117 (lightning induced transients on cables and power leads) is one of the two new requirements in MIL-STD-461G.  It was borrowed from RTCA/DO-160 section 22, and it is a subset of RTCA/DO-160 section 22: there is nothing in CS117 that doesn’t exist in section 22, but many aspects of section 22 were left out of CS117.  There was a desire to simplify, but the simplification was not performed for its own sake; rather, it was in keeping with two philosophical tenets of MIL-STD-461 since the “D” revision in 1993.  These are, first, that cable-related tests are performed at the bulk cable level, with no pin injection, and second, that platform installations are divided into two categories, internal and external (relative to a metallic platform).

MIL-STD-461B/C had EMP-like damped sine injection requirements CS10/11/12/13, two of which injected on the entire bundle and two of which injected at the pin level.  These were all subsumed into bulk cable injection (BCI) requirement CS116 in 1993.  Likewise CS114 and CS115 began as BCI requirements and have stayed that way.  CS117 is adopted as a BCI requirement only, eschewing the pin injection requirements in section 22.  This greatly simplifies the test campaign on the types of equipment to which CS117 applies, such as flight and engine controls that have multiple cables with lots of pins.  Pin injection is important with shielded cables where the installed length is greater than the ten meters required in MIL-STD-461.  For this small subset of cables, some thought will need to be given to possibly boosting the injected current to make up for the lower shield transfer impedance of the set-up vs. the installation.

CS117 has six waveforms borrowed from section 22, but only two levels, internal and external.  In addition to that simplification vs. the five different levels in RTCA/DO-160G section 22, another simplification is that there is no separate table for a single stroke application.  Instead, the single stroke levels of section 22 Table 22-3 have been incorporated into the multiple stroke Table VII of CS117.  Table 22-3 levels 3 and 4 become the first stroke of the multiple stroke requirement in CS117 Table VII; level 3 maps to internal, and level 4 maps to external.  Subsequent strokes in CS117 Table VII are from section 22 Table 22-4, except that for Waveforms 4/5A, there was some mixing and matching from levels under Waveform 4/1 in section 22 Table 22-4.

Multiple bursts in the same CS117 Table VII are exactly the same as section 22 Table 22-5 levels 3 & 4, again mapping to internal and external installations, respectively.

One other wrinkle is that RTCA/DO-160 uses the 5 uH LISN, vs. the MIL-STD-461 default to 50 uH.  This means that the same waveform applied in a CS117 set-up will apply less potential to the load than if the test were performed to section 22, because the power source impedance is higher with CS117.  This was considered by the TSWG and accepted as part of maintaining consistency with the default 50 uH LISN used throughout the standard.

CS118 (personnel borne electrostatic discharge) is the second new requirement in MIL-STD-461G.  Before getting into requirement and test details, some background is in order.  In the run-up to the MIL-STD-461G revision process, proponents of including an ESD requirement discussed failures in the field and how those could be tied to ESD problems.  Such damage would most likely occur during remove-and-replace operations, not during powered-up use; otherwise the failures would be much more dramatic and noticeable (i.e., hardware working during a mission and suddenly failing, as opposed to installing hardware, running a built-in test – BIT – and, on a BIT failure, installing a different box).  The application of ESD pulses to an unpowered box and then subsequently running BIT or some other acceptance test procedure (ATP) was argued not to fit within MIL-STD-461, just as lightning direct effects testing does not, but rather to belong in MIL-STD-810.  But this argument didn’t fly, not least because the candidate test methods were based on RTCA/DO-160 section 25 and IEC 61000-4-2, which apply ESD pulses to fully operational hardware and look for malfunction.

The test set-up and “gun” are based largely on RTCA/DO-160 Section 25, with the addition of a “target” borrowed from IEC 61000-4-2 for calibrating the current discharge waveform, and a contact discharge electrode design not found in RTCA/DO-160, which only requires air discharge.  The section 25 set-up was chosen over IEC 61000-4-2 because of the obvious similarities in a metal vehicle application, with the test sample enclosure directly grounded to structure, as opposed to the 61000-4-2 approach with a nonconductive table top 80 cm removed from ground, with at most a green wire ground connection.  The use of the 61000-4-2 target prior to each test is part of the measurement system integrity check philosophy, rather than relying solely on a “gun” calibration sticker.  Likewise CS118 requires an electrostatic assessment of the gun potential prior to the discharge.  Contrast these two measurements with RTCA/DO-160G section 25.5.2: “…The ESD generator shall be calibrated to produce a positive and negative 15,000 volt (+10%, -0%) peak output pulse. The generator setting required to produce this output shall be recorded.”

Applicability is limited to non-ordnance connected electronics; ordnance response to ESD is covered elsewhere, but not in MIL-STD-461G.  Limits are 8 kV for contact discharge and 15 kV for air discharge.  Contact discharge is the preferred method unless the test item has nonconductive surfaces requiring an air discharge approach.  Air discharges are performed not only at the 15 kV limit, as per RTCA/DO-160 section 25, but also at 2, 4, and 8 kV.  This is because air discharge current waveforms can have higher amplitudes at lower potentials, due to smaller arc distances and hence lower arc resistance.  It is most often the coupling from the radiated field of the ESD event that causes upset, and the higher the waveform di/dt, the larger the transient coupled into (potential) victim circuits.

Section 5.18.1 RE102 applicability removes the conditional limit on the upper test frequency and makes it 18 GHz, regardless of test sample clock speeds.  It was deemed that the time saved not testing to 18 GHz was insignificant.

The most notable RE102 changes relate to illuminating/interrogating the entire test set-up area, as opposed to just its width, as already noted.  A change in the RE102 measurement system integrity check for the 41” rod antenna acknowledges that the assumed Thevenin-model source capacitance of a 41” rod is not always 10 pF, because some large diameter rods have larger capacitance.  The standard now invokes the manufacturer’s suggested value.  But there is another much more subtle change, and it is important in the same way that the tip of an iceberg is important to a ship at sea.

Figure 4: Some say this is a photo of the iceberg that sank the Titanic.

MIL-STD-461F introduced a change in how the rod antenna is configured.  The purpose of that change was to detune an observed resonance that occurs between 20 – 30 MHz.  Part of the change included clamping a ferrite sleeve around the coaxial transmission line between rod antenna base and EMI receiver.  MIL-STD-461 cannot specify a manufacturer or part number, but the previously referenced MIL-STD-461F update article identified one candidate as a Fair-Rite Part Number 431176451. The salient feature of that bead as shown in Figure 5 is that its impedance is mainly resistive/absorptive in the 20 – 30 MHz frequency range of interest, as is appropriate for detuning a resonance.  But that information never made it into the standard; the only description other than the actual impedance range cited in Figure RE102-6 was in the MIL-STD-461F RE102 appendix stating that, “Floating the counterpoise with the coaxial cable electrically bonded at the floor with a weak ferrite sleeve (lossy with minimum inductance) on the cable produced the best overall results.”  That description was routinely ignored by many test engineers, which resulted in said engineers criticizing the MIL-STD-461F technique as flawed.  Of course, attempting to detune a resonance by adding a largely reactive component isn’t going to help matters any, only shift the resonance downwards in frequency.  MIL-STD-461G moves that impedance description to the main body section 5.18.3.3.c(1): “…A ferrite sleeve with 20 to 30 ohms impedance (lossy with minimal inductance) at 20 MHz shall be placed near the center of the coaxial cable length between the antenna matching network and the floor.”

Figure 5: Characteristics of MIL-STD-461F rod detuning rf sleeve (from Fair-Rite catalog)

But this subtle change of moving a recommendation from the appendix to the main body is just the tip of the rod antenna configuration iceberg. Much work remains to be done which will have to wait for MIL-STD-461H.  This work is now described.

 

An article published in the 2011 ITEM entitled “On the Nature and Use of the 1.04 m Electric Field Probe” explained in its conclusion that the only way to make an accurate field intensity measurement with a rod antenna was either to use the floor as the ground plane or, if the counterpoise was elevated above ground, to float it entirely.  The recommended technique was the insertion of an isolation transformer in the coaxial cable connection between the rod antenna base and the EMI receiver.  A separate suggestion from another researcher recommended a fiber optic link.  Both these suggestions were evaluated during the MIL-STD-461G revision process, but both came up short for reasons described presently.  Also, a test equipment vendor introduced a rod antenna that was inherently floated, using a fiber-optic link to a laptop computer controller.  Unfortunately, they were unable to make one available to the TSWG for evaluation during the MIL-STD-461G revision process.

 

Inserting a fiber optic link in the connection to a conventional rod antenna failed due to what appeared to be parasitic capacitance between the green wire ground in the laboratory power and the bias potentials fed to the opto-electronic converters.  The plan was to replace the power supply with batteries to see if that eliminated the problem, but time ran out.  The problem with isolation transformers is that there is always some degree of inter-winding capacitance between winding banks, and at these frequencies it cannot be ignored.  While the original problem dealt with by MIL-STD-461F was a parallel L-C trap formed by the capacitance between the counterpoise and floor and the inductance supplied by the coaxial shield connection, when an isolation transformer is inserted a new series L-C trap is formed from the inter-winding capacitance and the coaxial shield inductance.  The combination of capacitance and inductance has to be limited such that the resultant resonance (which cannot be eliminated, only moved around) is above 30 MHz.  Given that different models of transformers have different and unspecified inter-winding capacitance, it would have to be measured by the test facility and then a maximum cable length would need to be specified to work with it to keep the resonance above 30 MHz.  This is difficult to write into a standard.  We hope that all this will be ironed out in time for routine incorporation into MIL-STD-461H.  Stay tuned for progress updates in the form of articles on the subject either in future editions of ITEM or IN Compliance magazine.
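The constraint is just a series-resonance calculation.  The sketch below (Python) is illustrative only: it assumes a hypothetical inter-winding capacitance and a nominal shield/ground-loop inductance of about 1 µH per meter of cable, neither of which comes from the standard.

import math

SHIELD_L_PER_M = 1.0e-6   # assumed shield/ground-loop inductance, H/m

def resonance_hz(c_interwinding_f, cable_length_m):
    # Series L-C resonance of inter-winding capacitance and shield inductance.
    l = SHIELD_L_PER_M * cable_length_m
    return 1 / (2 * math.pi * math.sqrt(l * c_interwinding_f))

def max_cable_length_m(c_interwinding_f, f_min_hz=30e6):
    # Longest cable that keeps the series resonance above f_min.
    return 1 / (SHIELD_L_PER_M * c_interwinding_f * (2 * math.pi * f_min_hz) ** 2)

for c_pf in (10, 50, 100):
    c = c_pf * 1e-12
    print(f"C_iw = {c_pf:3d} pF -> maximum cable length about "
          f"{max_cable_length_m(c):4.1f} m for resonance above 30 MHz")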

Another RE102 change that was slated to happen but didn’t was wording that would allow the new ETS/Lindgren Model 3117 antenna to be used above 1 GHz in addition to the original double ridge guide horn, presently specified in MIL-STD-461 via its physical aperture of 24.2 by 13.6 cm.  As can be seen from Figure 6, showing both antennas side-by-side, the newer antenna doesn’t have sides as the more traditional looking horn does, and therefore specifying it via its physical aperture would be quite ambiguous.  MIL-STD-461 cannot specify test equipment by manufacturer and model, so a generic description that nevertheless conveys the desired characteristics is required.  We didn’t get a satisfactory description from the manufacturer, and discussed including salient performance characteristics instead, such as beamwidth, which was where the new antenna was much better than the old one.  But in the end it was decided that would be too complicated, because we would have physical apertures for all other antennas but performance characteristics for the new one, and no one wanted to change to performance characteristics for all antennas.

Figure 6: Traditional microwave DRG horn as specified in MIL-STD-461E/F and newer version not specified in MIL-STD-461

 

And finally, there was quite a bit of interest in adding a reverberation chamber alternative test procedure to RE102, much as for RS103, which was added in MIL-STD-461E.  There are several advantages to a reverb RE test method, and none of the drawbacks of RS reverb, namely the schedule hit.

Reverb RE testing captures all test sample emissions, rather than just those emanating from the front face.  A reverb technique removes test chamber resonance issues due to the partial absorber lining coverage allowed by MIL-STD-461.  The test chamber is much less expensive.  There is the potential for making more sensitive measurements than in an absorber-lined chamber, because we are capturing constructive interference of all the emanations at once.  The degree of improvement is based on the room “Q,” offset by the difference in gain between the traditionally required antennas and the biconicals that would be necessary.  Reverb purists who believe antenna gain doesn’t factor into a reverb measurement should hang on until they have read the next paragraph, which outlines a reverb technique for making near field measurements.

RE reverberation techniques exist, such as in RTCA/DO-160 section 21, but these all work on an assumption that the collected power is available to radiate from a dipole antenna, using a far field equation to analytically determine the field strength limit.  It was felt that this might not be the optimal approach, and an investigation based on the work of Norm Wehling, retired chief engineer at Elite Electronic Engineering Company, as published in the 1993 issue of ITEM, is underway.[i]  Although that effort was aimed at RS testing, the author realized it was eminently better suited for RE testing.  The basic idea is to use biconical antennas all the way from 30 – 1000 MHz and position them close to the normal placement for RE102 measurements, but put a paddle behind the antenna.  In an unlined chamber with the paddle stopped, this would be equivalent to the MIL-STD-462 test method prior to 1993, where unlined test chambers were the norm and any RE measurement was in fact a mode-tuned measurement, except for a single mode.  The paddle allows multiple modes, and the spectrum analyzer/EMI receiver performs multiple fast sweeps in max hold mode during a single revolution of the paddle, which rotates continuously at 6 – 7 rpm.  This means that a single frequency domain sweep, over in milliseconds, represents a single mode, because the paddle is nearly motionless in that time period.  If an unlined chamber were the basis of RE measurements, as it was prior to 1993, there would be nothing to add to the method, because basically the paddle just captures the peaks of the constructive interference instead of recording peaks and valleys (destructive interference), as in Figure 7 from Wehling.  But since the last twenty years have used an absorber-lined chamber, it is now necessary to back out the “boost” factor of the unlined chamber, which is evaluated by performing an ARP-958 antenna calibration in the stirred chamber and comparing the measured antenna factor to the normal calibration.  The difference is the “Q” of the room, and that must be backed out of the measured field intensity in the chamber in order to make the reverb measurement no more stringent than that in a lined chamber.  At least, that is the author’s theory and plan.
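The dB bookkeeping the author describes might look something like the sketch below (Python, illustrative only; the numbers in the example are placeholders, not measured data): the room “boost” is the difference between the antenna factor measured per ARP-958 in the stirred chamber and the normal free-space calibration, and that boost is subtracted from the stirred-chamber result.

def corrected_field_dbuv_per_m(v_measured_dbuv, af_freespace_db, af_stirred_db):
    # The stirred chamber makes the antenna look "better" than it is, so the
    # apparent (stirred) antenna factor is lower than the free-space value.
    q_boost_db = af_freespace_db - af_stirred_db
    # Convert the receiver reading to field strength with the free-space AF,
    # then remove the room boost so the result compares to a lined chamber.
    return v_measured_dbuv + af_freespace_db - q_boost_db

# Hypothetical example: 35 dBuV at the receiver, biconical AF of 18 dB/m,
# stirred-chamber AF measuring 8 dB lower than the free-space calibration.
print(corrected_field_dbuv_per_m(35.0, 18.0, 10.0))   # 45.0 dBuV/m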

Figure 7: Field uniformity without and with stirring in a typical MIL-STD-461-sized test chamber from 30 – 200 MHz, from Wehling.

The author’s investigation was nowhere near complete during the “G” revision process, but might bear fruit for the next revision cycle.

Section 5.19 RE103 has the same sort of changes in it as already described for CE106.

Section 5.20.1 RS101 applicability adds a note explaining when the requirement is applicable to equipment installed on Navy aircraft: “For Navy aircraft, this requirement is applicable only to equipment installed on ASW capable aircraft, and external equipment on aircraft that are capable of being launched by electromagnetic launch systems.”  The clause about electromagnetic launch systems (italicized in the original) is new in “G.”

In addition to the RS103 changes already cited, there is a subtle change in the applicability of the requirement at the tuned frequency of a radio receiver.  A little historical background.

MIL-STD-461D and previous versions of MIL-STD-461 did not require RS103 testing at the tuned frequency of a radio receiver.  The reason for this is that the radio electronics are less exposed to the external electromagnetic environment (EME) than the antenna, and the radio receiver is tested with the antenna port dummy-loaded, so it was clear that the antenna would conduct much more signal into the electronics than would couple through the platform and through the radio enclosure.  During the revision process culminating in “E”, a case of two radios mounted side-by-side interfering with each other was brought forth.  One radio was tuned to the local oscillator (LO) of the other radio, and the LO leaked enough to couple into the victim radio.  This case resulted in a change where the RS103 requirement at the tuned frequency of a radio was the appropriate RE102 limit relaxed by 20 dB.  The limit basis was that the culprit would meet RE102, but the intensity a few centimeters away would be higher than the limit at one meter.  Under MIL-STD-461F, this interaction was de-emphasized, but NAVSEA (surface ships) had a concern for radio receivers mounted below decks, far from their topside antennas but exposed to wireless networks and handheld radio transmitters used nearby.  So there was no exception whatsoever at the tuned frequency of a radio for this Service and application.  MIL-STD-461G builds on this with further explanation (from the appendix):

“Revision G of this standard has further changed the applicability of RS103 for tuned receivers. The exemption at the tuned frequency to meet RS103 is in place for Air Force and Army equipment. For Navy equipment, RS103 is applicable at the tuned frequency unless the antenna is permanently attached to the equipment being tested. The reason for this is that on Navy installations, the antenna may be situated a far distance from the receiver, so these services want the test to apply to a receiver. Since the exemption at the tuned frequency is installation dependent, it may be extended to other systems as tailoring to this standard with procuring activity approval. For equipment where the antenna is permanently attached to the equipment, such as portable equipment or WiFi transmitters, the expectation is that there will be interference at the tuned frequency that is a “front door” event. In those cases, the requirement is that the antenna/receiver work after application of the E-field. Therefore, during the test, responses when RS103 is at the tuned frequency are allowed.”

MIL-STD-461G RS103 Section 5.21.3.3.d. Placement of electric field sensors has slightly different wording than MIL-STD-461F RS103 section 5.20.3.3.d.1 on the same subject, but the change is only to make position information clearer; there is no change to the positioning requirement.

Section 5.22.1 RS105 applicability adds a note explaining when the requirement is applicable to equipment installed on surface ships.  And the oscilloscope single-event bandwidth is updated to 700 MHz from the previous 500 MHz, even though the limit itself is unchanged.

Table of New Equipment Allowed/Required in MIL-STD-461G


* Specified as acceptable for use, but not required.

[i] Wehling, Norman, “Repeatable Low-cost Radiated Susceptibility Test in a Standard Shielded Enclosure,” ITEM, 1993, p. 16ff.

The post MIL-STD-461G: The “Compleat” Review appeared first on Interference Technology.

Multilayer SMD Ferrites Optimized for Peak Current Loads

By Markus Holzbrecher

1.   Background

1.1.     Chip bead ferrites

Chip bead ferrites are inductive surface mount devices (SMDs) used for filtering undesired high frequency signal distortions in printed board assemblies. They are manufactured using a multilayer screen printing process. Optimised for the highest possible losses, these components consist of a nickel-zinc-ferrite body with a very fine embedded silver coil with a thickness of just a few micrometres. This structure makes conventional SMD ferrite beads vulnerable to current spikes above their maximum rated load, resulting in gradual degradation or, in some cases, immediate destruction of the component.

1.2.     Application

A typical chip bead ferrite application is shown in Figure 1. The multilayer ferrite is used as a longitudinal filter near the input of a circuit. Due to the low charging resistance of the capacitor, a very high pulse current flows for a short time at switch-on. This pulse current temporarily loads the SMD ferrite with a current that can reach many times the component’s maximum rated level.

In this example, a multilayer ferrite designated as a Multilayer Power Suppression Bead (MPSB) has a rated impedance of 600 Ω for a maximum permissible current load of 2.1 A. The current surge in this configuration reaches a peak value of approx. 19 A and has a pulse length of 0.8 ms before declining to the circuit’s rated current.

Figure 1: Application showing current peak at switch-on (5 A/DIV, 100 µs/DIV)

 

In general, an SMD ferrite’s maximum rated current also defines the component’s maximum current amplitude for any given temporary load. However, multilayer ferrites are now available that cater for current surges above the maximum continuous current rated in their data sheets. Examples of these new components are examined in more detail below.

1.3.     Testing method

Current peaks occur frequently in real life applications, for example at switch-on of switch mode power supplies and electric motors. Windscreen wiper motors in vehicles are a well-known example of recurrent current pulses, but discharge lamp ballasts can also produce high current peaks when the light is switched on. The input capacitor in a switch mode power supply can produce a particularly high current peak, which the upstream EMC filter needs to withstand. In this context, pulses are understood as temporary current peaks above the circuit’s rated DC current level, limited to a time span of less than 8 ms.

In search of a common standard for measuring the pulse load capacity of SMD ferrites, an appropriate approach was found in the definition of the melting integral for fuses. According to this standard, a pulse of 8 ms duration is applied to the fuse to “give the current time” to heat the fuse to determine its I²t value.  If the fuse withstands a given current pulse, the current is increased and this is repeated until the fuse fails. In this process, pauses of 10 seconds are inserted between pulses to give the component the necessary time to regenerate (cool down).

Würth Elektronik eiSos has developed an adapted test routine for multilayer ferrites based on this fuse testing standard. Using the same 8 ms pulse length, current pulses of increasing strength are applied to the multilayer ferrite until it is destroyed. The components are subjected to incrementally increasing pulse currents starting from 1 A.

A rectangular pulse, as shown in Figure 2, was selected as the pulse shape for all tests, as this loads the component with the highest possible energy for a given pulse length, although such a pulse will only very rarely occur in real-life situations.


Figure 2: Possible switch-on pulse shapes

2.   Pulse Load Capacity Analysis

Unlike fuses, multilayer SMD ferrites do not lend themselves to a generally applicable formula that allows conclusions to be drawn for various current peak values and different pulse lengths by calculating the melting integral. Rather, data sheet values are determined empirically and rely on extended test series with varying parameters.

The following example serves to confirm that the melting integral is unsuitable for multilayer ferrites (using the Würth WE-MPSB ferrite 742 792 206 01 with Z = 600 Ω, IR = 2.1 A, and RDC,typ = 43 mΩ). This component has a maximum peak current load capability of 18 A at a pulse length of 8 ms (5 s pause, 24 °C), which produces an I²t value of 2.592 A²s.

The following result is obtained when calculating the current for a pulse length of 2 ms based on the I²t value for 8 ms:

I(2 ms) = √(I²t / t) = √(2.592 A²s / 2 ms) = 36 A

However, the data sheet value is specified as max. 24 A, as shown in Figure 3. The calculated current differs significantly from the measured value. This shows that it is not appropriate to apply the known calculation method for the melting integral I²t to a multilayer ferrite.
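The same comparison in code form (Python, using only the values quoted above):

import math

def i2t(current_a, pulse_s):
    # Fuse-style melting integral I^2*t.
    return current_a ** 2 * pulse_s

def current_from_i2t(i2t_a2s, pulse_s):
    # Current the fuse model would predict for a given pulse length.
    return math.sqrt(i2t_a2s / pulse_s)

i2t_8ms = i2t(18.0, 8e-3)                        # 2.592 A^2s at 8 ms
predicted_2ms = current_from_i2t(i2t_8ms, 2e-3)  # 36 A by the fuse model
print(f"I^2t at 8 ms: {i2t_8ms:.3f} A^2s")
print(f"Fuse-model prediction at 2 ms: {predicted_2ms:.0f} A "
      f"(data sheet allows only 24 A)")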

Figure 3: Specified peak current load capability

2.1.     Optimising the multilayer structure

Due to their silver layers with thicknesses of only 8 to 20 µm, multilayer ferrites are not inherently designed for high pulse currents. Würth has developed a new design combining high current tolerance, up to 75% lower RDC and the highest possible impedance over the complete frequency range. Depending on the desired impedance and peak current level, the design is varied for each individual component type.

2.2.     Current pulse tolerance relationships

Figure 4 shows the new ferrite bead’s current pulse tolerance behaviour in more detail, using the 742 792 206 01 bead type as an example. The current vs. pulse length curve on the left side shows the maximum permitted peak current for pulse lengths ranging from 0.5 ms to 8 ms. Each ferrite bead type has an individual curve of this kind, and these curves are only applicable for single current pulses.


Figure 4: Permissible peak current by pulse length (left) and pulse number (right)

The right graph shows the maximum permitted pulse current for repeated current pulses. A maximum pulse length of 8 ms was selected to determine these values.

2.3.     Influencing factors

The factors influencing the ferrite beads’ behavior are:

  • The pulse length, with standard test values ranging from 0.5 ms to 8 ms. The longer the pulse, the lower the maximum pulse load capability.
  • The number of pulses, which was varied from 10 to 100,000 pulses in the tests (see Figure 4). The maximum permissible current pulse load drops as the number of pulses increases.
  • The temperature, which should be noted as a third reducing factor: as the temperature rises, RDC increases, which results in a further reduction of the maximum permissible current pulse load.

Each of these interlinked factors also depends on the pause length between individual pulses. To analyse the linked system with a shorter pause time, all measurements need to be repeated while varying the influencing factors: temperature [T], pulse repetitions [n] and pulse length [t].
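As an illustration of how such empirically determined curves might be used, the sketch below (Python) interpolates between the two single-pulse points quoted earlier for the 742 792 206 01 (24 A at 2 ms, 18 A at 8 ms), assuming, purely for illustration, that the curve is roughly straight on log-log axes between those two points; real designs should read values directly from Figure 4.

import math

# Data sheet points quoted above: (pulse length in s, max peak current in A).
POINTS = [(2e-3, 24.0), (8e-3, 18.0)]

def max_single_pulse_current(pulse_s):
    # Log-log straight-line interpolation between the two published points
    # (an illustrative assumption about the curve shape, not a specification).
    (t1, i1), (t2, i2) = POINTS
    slope = (math.log(i2) - math.log(i1)) / (math.log(t2) - math.log(t1))
    return i1 * (pulse_s / t1) ** slope

for t_ms in (2, 4, 8):
    print(f"{t_ms} ms pulse -> about {max_single_pulse_current(t_ms * 1e-3):.0f} A "
          f"(single pulse)")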

2.4.     New and previous ferrite bead series comparison

When developing the new ferrite bead series, Würth aimed to achieve impedance levels comparable with the previous series while adding tolerance to pulse current loads. Using the example of the 600 Ω models in size 0805, as shown in Figure 5, the new series exhibits almost the same impedance along with a higher rated current and pulse load capability, due to its lower resistance.


Figure 5: Impedance and rated current load capability of the 600 Ω WE-CBF and WE-MPSB bead ferrites

The new ferrite beads exhibit a significantly higher pulse load capability than the equivalent previous types. Figure 6 shows the maximum pulse level of the older 600 Ω type on the left and the maximum pulse level of the comparable newer 600 Ω model on the right. Moreover, Würth is now able to specify the pulse load capability of SMD ferrites manufactured in the multilayer screen printing process.


Figure 6: Comparison of the different pulse load capability of the WE-CBF and WE-MPSB series bead ferrites

3.   Conclusion

Specific chip bead ferrite components can now cater to the requirements of circuits that load multilayer ferrites with temporary peak currents exceeding their rated maximum continuous current. The components’ multilayer structures are optimised to enable a higher current load capability by lowering the structure’s inherent resistance. The maximum pulse load capabilities of the new multilayer ferrites were determined using empirical measurements as calculations using a formula for the behaviour of fuses proved inappropriate.

 

 

Author’s biography:

Markus Holzbrecher, born 1983, graduated from Leipzig University of Applied Sciences with a diploma degree in Electrical Engineering. Since 2011, he has been responsible for the product area of EMC components for PCB assembly at Würth Elektronik eiSos.

The post Multilayer SMD Ferrites Optimized for Peak Current Loads appeared first on Interference Technology.

What’s new: IEC 61000-4-5 Second Edition vs. Third Edition

by Jeff Gray, Chief Technology Officer, Compliance West USA

Introduction
IEC 61000-4-5 is part of the IEC 61000 series and describes surge immunity testing for over-voltages caused by switching and lightning transients. The second edition of IEC 61000-4-5 was released in 2005 and has been in use for many years. The third edition was released as an EN standard in 2014. The general philosophy of the third edition is unchanged from the second edition. However, there have been a number of refinements to the standard: additional explanation to clear up ambiguities, new descriptions that were not included in the second edition, and new (informative) Annexes that can be used to help in the application of the standard. The purpose of this article is to outline the changes and additions that are now part of the IEC 61000-4-5 third edition.

Critical Transition Dates
Transition from the second edition to the third edition is already taking place within the EU according to the following dates:
19 Mar. 2015 – Date of Publication (dop): The third edition has to be implemented by publication of an identical national standard by CENELEC member countries.
19 June 2017 – Date of Withdrawal (dow): National standards that conflict with the third edition must be withdrawn (i.e. the second edition can no longer be used).

Wave Shape Changes
One simple, seemingly benign addition to the third edition was a definition of “duration”: actually three definitions, because one voltage waveform duration and two current waveform durations have been defined. This changes how the waveform times are measured, and may have a significant impact on the equipment used to perform some tests. The change most greatly impacts the 8×20 μs short-circuit current waveform. Figures 1 and 2 compare the measurement from the 2nd and 3rd editions of the standard. Compare T2 in the second edition to Tw and Td in the third edition.

Fig. 1: waveform definition in 2nd edition (T2)

 

Fig. 2: waveform definition in 3rd edition (Td)

 

Another important change to the impulse waveform is that the 1.2×50/8×20 μs wave shape must be within the limits of the standard when the impulse is applied through a Coupling-Decoupling Network (CDN), specifically through the 18 μF coupling capacitor. This requirement was ambiguous in the second edition: Figure 3 of the second edition shows an 8×20 μs current waveform with no CDN connected, and Table 7 in a following section describes an 8×20 μs waveform at the EUT port of the CDN (through the 18 μF coupling capacitor). Clearly it is not possible to generate the same impulse waveform with and without the 18 μF coupling capacitor in the same generator/CDN system. While the open-circuit voltage waveform is not affected, the 8×20 μs short-circuit current wave shape will be significantly distorted by the addition of the 18 μF capacitor, and the peak output current will be reduced by approx. 10% (depending on the design of the impulse generator). Figure 3 illustrates the problem: the normalized short-circuit output current of a nominal impulse generator is plotted against the same generator’s output into an 18 μF capacitor. With the addition of the 18 μF capacitor the peak current is significantly lower, and the waveform duration is shorter.

Figure 3: Normalized Short-circuit Output Current

 

Table 3 in the second edition seems to imply that the impulse parameters are specified not including the CDN (Figure 4). In the third edition, Table 6 is used to provide the same information, but it explicitly states that a CDN is to be included when measurements are made (Figure 5). Table 6 also includes a specification for short-circuit current when the (9μF + 10Ω) CDN is used (for line-to-ground testing). In this case, note that the short-circuit current is significantly reduced, due to the 10-Ohm resistor in the CDN.
Open-circuit peak voltage (±10 %)    Short-circuit peak current (±10 %)
0,5 kV                               0,25 kA
1,0 kV                               0,5 kA
2,0 kV                               1,0 kA
4,0 kV                               2,0 kA
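As an aside, the ratio of open-circuit voltage to short-circuit current in this table is what defines the combination wave generator’s effective output impedance; a quick check (Python):

pairs_kv_ka = [(0.5, 0.25), (1.0, 0.5), (2.0, 1.0), (4.0, 2.0)]
for voc_kv, isc_ka in pairs_kv_ka:
    # Effective source impedance = open-circuit voltage / short-circuit current.
    print(f"{voc_kv:3.1f} kV / {isc_ka:4.2f} kA = {voc_kv / isc_ka:.0f} ohm")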

 

Fig. 4: Table 3 from 2nd ed.

Fig. 5: Table 6 from 3rd ed.

In the second edition of the standard, the 10×700/5×320 μs surge waveform is described alongside the 1.2×50/8×20 μs waveform, and in some cases within the standard it is not clear which waveform is to be used for a particular test. This is clarified in the third edition: the 10×700 μs impulse is only to be used on external ports that connect to lines that exit the building (more details on this point later in this article). These external lines are typically longer than 300 m. The inductance and distributed capacitance of these transmission lines provide wave-shaping of any real-world transients, such that the equipment connected to the external lines sees a transient that is slower – more like the 10×700/5×320 μs waveform. Further explanation is provided in the new Annex A of the third edition.

New CDN and Calibration Requirements
This new Annex A now contains the full description of the 10×700 μs impulse, including the waveform generator, calibration of the generator, the CDN to be used, and the calibration of the CDN. In the second edition (section 6.2) only the waveform and its calibration were described. The new Annex A does not change any requirements other than the waveform duration definition previously mentioned. However, new requirements have been added, especially relating to CDN performance.
In the second edition, calibration of the 1.2×50/8×20 μs generator was described in section 6.1.2. In the third edition this is covered in section 6.2.3, and additional details have been added. The updates provide clarification regarding the type of equipment that should be used to perform calibrations, including specifications for current transformers (if used to measure short-circuit current). Similar details have been included in Annex A regarding the 10×700 μs impulse waveform. Section 6.2.3 also makes reference to Annexes E and G of the standard (both are new in the third edition). Annex E is quite useful, as it includes many figures that show the various waveform measurements in detail (rise and duration) for all of the waveforms.
Annex G is less useful unless one has an advanced degree in mathematics. The purpose of Annex G is to point out the fact that it can be quite difficult to make accurate measurements of single-shot, high frequency events. A common example that may be more familiar to the reader is the calibration of a typical 10x oscilloscope probe. The usual method to adjust the probe is to connect to a square-wave generator, and adjust the capacitance of the probe while observing the waveform on the oscilloscope (usually a screwdriver slot is provided on the probe to make the adjustment). The probe is adjusted so that the wave shape looks “square”: the rise time is as fast as possible with minimal overshoot, or ringing, on the front edge.
Clearly a probe that is not adjusted properly, or a probe-scope combination with a low (poor) frequency response can cause an impulse voltage or current waveform to appear different on the oscilloscope screen than it actually is. So in layman’s terms, Annex G could be summarized as follows: “When measuring impulse waveforms for calibration, make sure that your measurement instruments can capture the true waveform and do not distort the results”. Fortunately Annex G is only a recommended practice (informative), not a requirement (normative).
CDNs have become a bigger part of the 61000-4-5 standard in the third edition. The flowchart that is used to select particular CDN/test configurations has been updated to reflect newer test practices. Figure 6 shows the flowchart from the second edition; Figure 7 shows the same flowchart from the third edition. Additional figures have been added in the third edition standard, which show new test setups, and at least one test configuration (Fig. 13, 2nd ed.) has been eliminated. It is important to carefully study the new test setups to ensure compliance with the third edition.

Figure 6: CDN selection flowchart from second edition

 

Figure 7: CDN selection from third edition

The third edition adds a peak voltage specification at the EUT port of the CDN (Table 4). The voltage tolerance varies based on the current rating of the CDN. Both the old and new standards include a tolerance specification for front time and waveform duration (Table 6, 2nd ed.; Table 4, 3rd ed.), but the tolerances have been relaxed slightly in the third edition, and the table now goes up to 200 A (the second edition went only to 100 A). This will probably not affect most users, because most CDNs and products being tested are rated 16 Amps or less. A related note in the new section 7.3 of the third edition points out that care must be taken regarding the tolerances of the CDN: a high-current rated CDN is allowed wider tolerances, but such a CDN cannot be used with lower-current rated products unless it meets the tighter tolerance specifications that apply to lower-current CDNs.
Focus on CDN’s continues with new calibration requirements in the third edition: Section 6.4 for the 1.2×50/8×20 μs waveform generator, and Annex A Section 4 for the 10×700/5×320 μs waveform. In general it is no longer possible to separately calibrate an impulse generator and the CDN; both need to be considered and calibrated together. In the past, the CDN was considered more of a passive component – now the interaction of the CDN with the impulse generator is identified and described, which should allow for more consistent test results for tests performed in different laboratories, or with different impulse test equipment.
Annex F is new in the third edition. It covers measurement uncertainty (MU), specifically relating to impulse waveforms. MU is a topic that has received more coverage in recent years. Awareness has increased that it is no longer “good enough” to simply trust the calibration sticker on equipment. The user of the equipment is obliged to better understand what parameters are being calibrated, and the effects that variation has on measurements. In the past there were generally accepted “margins of error” that were applied to specification limits to ensure compliance even when equipment that is only nominally calibrated is used in testing. More recently, organizations such as the IECEE Committee of Testing Laboratories (CTL) have become concerned about measurement accuracy and have published a number of decisions and operational procedures on this topic. This movement is also reflected in the transition to risk-based assessments for some product categories (Medical and Test/Measurement Equipment). Expect more applications of measurement uncertainty and other statistical tools in future standards as well.

Other Updates and Clarifications
Both the old and new standards describe Test Setups in Section 7. This section has changed quite a lot in the third edition, although the changes are mostly for clarification – the requirements are essentially the same. The text changes of Section 7 primarily follow the flow chart changes that were described previously in this article. The third edition adds a new section for verification of test instrumentation (Section 7.2). Basically, the standard now requires that the test setup and resulting impulse waveform be verified prior to connection of the EUT. This methodology has been considered best practice for many years, but now it is required, and therefore must be documented. Another best practice that is now explicitly stated in Section 10 of the third edition is to document the test setup in the test report using drawings and/or photos.
For AC equipment, impulses are applied at 0, 90, 180, and 270-degree phase angles. The third edition provides some clarification for testing three-phase equipment: the phase angle is measured between the two Lines being tested (not Line to Neutral). Also, the new edition points out that when testing from Neutral to Ground, phase matching is not needed (because there should be no voltage from Neutral to Ground), and so this test should be treated similarly to DC testing (five positive impulses and five negative impulses).
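A minimal sketch of how those clarifications translate into an application matrix is shown below (Python, illustrative only): AC line couplings get both polarities at the four phase angles, while Neutral-to-Ground (treated like DC) gets five positive and five negative unsynchronized impulses. The coupling names and the number of impulses per AC phase angle are placeholders; take the real counts from the standard and the product specification.

from itertools import product

AC_PHASES_DEG = (0, 90, 180, 270)

def surge_plan(ac_couplings, dc_like_couplings, impulses_per_ac_condition):
    plan = []
    # AC couplings: both polarities at each of the four phase angles.
    for cpl, pol, phase in product(ac_couplings, ("+", "-"), AC_PHASES_DEG):
        plan.append((cpl, pol, f"{phase} deg", impulses_per_ac_condition))
    # Neutral-to-Ground (and DC) couplings: no phase synchronization,
    # five impulses of each polarity.
    for cpl, pol in product(dc_like_couplings, ("+", "-")):
        plan.append((cpl, pol, "unsynchronized", 5))
    return plan

for row in surge_plan(["L1-L2", "L1-PE"], ["N-PE"], impulses_per_ac_condition=1):
    print(row)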
Section 8.2 of the second edition specifies that testing of secondary protection should be conducted at a voltage just below the breakdown voltage of the protection device (in addition to the standard voltage levels). This requirement was problematic because it required further investigation by test laboratories regarding the equipment design, and in some cases a judgment call regarding the breakdown voltage of protective circuitry. This requirement has been removed from the third edition (missing from Section 8.3) but there is still some ambiguity on this point: In the last paragraph of C.2.2.2 (Annex C) of the third edition, there is a statement that system-level testing should be conducted considering the breakdown voltage of protective components, and voltages adjusted accordingly. However since Annex C is informative (not normative) it is left to the user how to apply the statements in this section.

Clarification of Test Procedures
Annex B (Annex A in the second edition) provides guidance on selection of test voltages for impulse testing. The new Annex B makes clear the distinction between internal and external ports, and which impulse waveform (1.2×50 μs or 10×700 μs) is to be applied. Table A.1 in the second edition has been split up into two tables in the third edition (B.1 and B.2), which makes the test recommendations easier to interpret. A comparison of the tables is shown in Figures 8-10.

 

Fig. 8: Table A.1 from the second edition

Fig. 9: Table B.1 from the third edition

Fig. 10: Table B.2 from the third edition

 

In addition to the internal/external distinction, note that there are other changes as well: testing of Installation Class 3 DC systems is no longer required. Also, compare the following text (Figure 11) from Annex B of the third edition to the text below Table A.1 of the second edition (Figure 8 in this article): The selection of the proper impulse waveform is made much clearer.

Fig. 11: Impulse waveform selection from Annex B (third edition)

 

Annex C in the third edition (Annex B in the second) is essentially unchanged except for one important clarification: DC power ports, such as ports for providing power to a laptop, do not need to be tested.
The new Annex H concerns impulse testing of equipment and power lines rated above 200 Amps. This is probably not something that most readers of this article will need to deal with. Since the impedance of such circuits is so low, any energy from an impulse test is likely to be absorbed. This consideration is reflected in Annex H as well.
Summary
In summary, the changes in the third edition of IEC 61000-4-5 are likely to impact any organization that performs impulse testing or calibrates impulse test equipment. Manufacturers of products that are tested to the second edition will most likely not require any product redesign, as the actual impulse tests are relatively unchanged. The third edition should result in a more consistent application of impulse testing, and greater repeatability of test results.
One final comment: both the old and new versions of IEC 61000-4-5 include the following statement: “Equipment shall not become dangerous or unsafe as a result of the application of the tests…” (see the end of Section 10 in the second edition and Section 9 in the third). While the statement seems virtuous and straightforward, it complicates matters significantly if it is strictly interpreted. The IEC 61000 standards do not define “dangerous or unsafe”, nor do they list any requirements or tests that can be used to determine whether the EUT is dangerous as a result of the applied impulse tests. In product safety standards, a product is considered unsafe if it fails dielectric withstand testing, or if there is excessive leakage current. Both of these situations could occur as a result of a component breakdown during impulse testing (clamping of an MOV or GDT, for example). The equipment could remain operational and otherwise give no indication that it is unsafe. Does this mean that EMC test labs are now obliged to perform electrical safety tests after the completion of impulse testing? Hopefully this is not the case.

[1] C. F. M. Carobbi and A. Bonci, “Elementary and Ideal Equivalent Circuit Model of the 1.2/50 – 8/20 μs Combination Wave Generator,” IEEE Electromagnetic Compatibility Magazine, Vol. 2, Quarter 4, 2013.



The IEMI Threat and a Practical Response

William Turner

Senior Design Engineer

MPE Ltd

Email: wturner@mpe.co.uk

 

IEMI Threat

With the increasing use of electronics to control so many aspects of modern life, from smart grids to driverless cars, Intentional Electromagnetic Interference (IEMI) is a threat of growing concern. Various initiatives have been set up to address the needs of specific market areas, and new standards are under development.

However, to offer protection one must start by understanding what is being protected against and how that compares and contrasts with other EM protection standards. Figure 1 below shows the frequency and comparable magnitudes of the various EM threats. Note that EMI here refers to the typical background EMI that can be experienced from benign sources such as radio and TV broadcasting, radar, microwave, networking and GPS systems.

Figure 1 – Frequency vs. magnitude of EM threats

It can be seen that IEMI differs from most other EM threats in that it typically occupies a narrow frequency band, dependent upon which specific malicious source is being used. This contrasts with other threats such as lightning and HEMP (high-altitude EMP), which are very broadband in nature.

The other notable difference is the area of the spectrum occupied: IEMI-radiated threats are almost never below 10MHz, as the coupling efficiency of such a threat would be much reduced. Instead the frequencies used tend to be much higher, to improve the effectiveness and penetration of any attack. The exception to this is for pulses directly injected into power and communications conductors, where lower frequencies are able to travel long distances with minimal attenuation.

Methods of Threat Delivery

The biggest problem with protecting against IEMI is that the sources can vary massively between different aggressors, as can the way any attack is launched.

IEC 61000-4-36 is the standard for IEMI immunity test methods for equipment and systems and should be considered essential reading for anyone attempting to protect against IEMI. IEC 61000-4-36 defines categories of aggressors as Novice, Skilled and Specialist. These definitions are based on their capability, and IEC 61000-4-36 gives examples of the types of attack one could anticipate from those categories.

Generally, Novice attacks will be short-ranged or require some direct access, and take the form of technologically simplistic, low-cost methods such as modified microwave ovens, ESD guns or even EM jammers that can be bought online for around a hundred Euros. Although unsophisticated, such attacks should not be underestimated and could easily cause persistent disruption or damage without leaving an evidence trail of an attack. An example of what can be constructed from rudimentary everyday components is shown in Figure 2.


Figure 2

The next category, Skilled aggressors, comprises those with good understanding and experience, or who have access to commercially available equipment. That equipment could be something like the Diehl pulser pictured in Figure 3.

Figure 3 – Diehl pulser

This is an off-the-shelf “interference source” capable of emitting a 350 MHz damped sine wave at 120 kV/m at 1 m, continuously for 30 minutes. With an appropriate antenna, it would be capable of disruption or damage at a greater distance.

In the Novice and Skilled categories one could also anticipate conducted attacks where access is possible, involving direct pulse or continuous-wave injection onto the power and/or communication lines. These should not be underestimated and can have a huge impact on systems, with effects such as triggering of safety protection devices, disruption of switched-mode PSUs causing power cuts, and physical denial of service (DoS) by flooding xDSL or ISDN systems. The ultimate threats are high-power pulses that bring about physical damage to equipment.

The third category of Specialist is in the realms of research laboratories and high-end military programs with accordingly high capabilities. This covers systems such as the Boeing CHAMP missile and the Russian-developed RANETS-E, which is capable of a 500MW output and range of 10km. Plentiful information on both systems is available in the public domain. Although it would be obvious if a large truck with antenna was parked outside, or a missile had been launched overhead, a Specialist aggressor’s equipment can be much more subtle than that, especially if fixed equipment can be set up nearby – in a building across the street or even an adjoining room. This allows complex equipment to be set up and an attack to go unnoticed for a long time, or perhaps not be noticed at all.

This raises the most critical question concerning protection from IEMI – access. Access is a matter of distance: either from threat to target for radiated attacks, or to the incoming power and communications cables for injected conducted disturbances.

Effects on Operations

Numerous papers have been written on the disruptive and damaging effects of IEMI attacks on electronic systems, and covering that in detail is beyond the scope of this paper. Readers are encouraged to review the many papers and presentations on the subject.

What can be said here is that the effects can vary from the very subtle (errors in data streams and microprocessor instruction operation) through to system lockups, hard resets and even permanent damage that renders a system beyond repair.

The exact effect of a particular aggressor’s action against a particular system is very case-specific and would require thorough analysis. However there is one general rule that applies, and it may appear obvious: the greater the interference, either as a conducted or radiated disturbance, the more likely effects will be seen and the more severe they will be.

It has been shown many times that a radiated or conducted disturbance will cause damage at higher power levels, but at lower power levels can cause only minor upsets or even no significant effect at all. This makes disturbance attenuation the key to protection.

Asset Protection

While the internal resilience of equipment is a key part of IEMI protection, it is known to vary even between equipment made by the same manufacturer. Often it is not possible to influence that characteristic, especially where third-party equipment is concerned, so one must look instead at how those assets can be protected by external measures.

As can be seen in Figure 1, there is little frequency overlap between traditional threats and IEMI. One should bear this in mind when planning the protection strategy for a system. However it does not mean that existing protection systems or even infrastructure are completely useless, just that they shouldn’t be considered the whole solution.

What one does need to consider is the type of IEMI threat likely to be experienced. For example, it is unlikely that a small company in the UK will suffer an attack from a Boeing CHAMP missile directly overhead, but it is plausible that it could be subject to interference from a malicious individual with some pulse generator plans from the internet. It is also plausible that a company of national significance could be subject to attack by organised terrorists, with whatever equipment and skills their organisation possesses.

Bearing this in mind, there are different strategies one could adopt for protection. The obvious and technically naïve strategy is to assume that, because all equipment must comply with the EMC directive, it is adequately protected. However, the various EMC directive immunity tests are all significantly below the levels and frequencies that could be experienced during an IEMI attack (V/m against kV/m), and EMC directive conducted compliance typically focuses on the lower bands, where SMPS and similar switching-noise problems exist, rather than the higher bands where most IEMI threats lie. ESD protection has only limited relevance: as it only mandates no permanent damage, disruption is acceptable.

The second approach is to go to the other extreme and apply the traditional metal box / Faraday cage solution shown in Figure 4, as often seen in high-end military applications and EMC test chambers. This assumes no inherent resilience in any equipment and is the same strategy adopted for MIL-STD 188-125 HEMP (nuclear EMP) protection of critical military infrastructure, where even a minor disruption is not tolerable. For IEMI protection applications where that same ‘work-through’ requirement exists, this really is the only guaranteed solution: one would simply need to ensure that the shield performs up to at least 18 GHz, and the same for the filters on incoming power and communications lines.


Figure 4

As confirmation of this principle, MPE recently tested their filters against the Diehl pulser pictured in Figure 3. As shown in Figure 5, LEDs were positioned both inside and outside the shielded cabinet. At this stage it was only a qualitative test, with the power source outside filtered using one of MPE’s HEMP filters. The effects were very clear: no LEDs were damaged inside the cabinet even at very short ranges from the Diehl source, whereas most of the LEDs outside suffered failure at this and greater distances.


Figure 5

There are plans to do more detailed quantitative tests against this and other IEMI sources, including the often-touted modified microwave oven. However, knowing that the same filter construction has been proven in 40 GHz filtering / shielding applications, and that the energy from IEMI is still below that of MIL-STD 188-125 (150 kV, 2500 A conducted), the outcome is expected to again be positive and to show that standard MPE HEMP filters also protect against IEMI. The assessment is likely to take a similar approach to that of HEMP filter testing described in IEC 61000-4-24, where residual currents and voltages are measured on the protected side of the filter against a known incoming pulse.

For lesser applications taking this approach, one would only need adequate shielding and filtering to the appropriate level for the anticipated threat. The reality is that such a shield wouldn’t be worth providing unless it was giving at least an overall 60dB reduction. This approach could be scaled appropriately to what is desired to be protected: if only a server cabinet is deemed critical, then only that needs shielding and filtering. The downside of such protection is the cost – for a cabinet alone, it could run to over £1000.

Protecting a large, high-end military facility can cost in excess of £100,000 in filters and more than £1m in shielding and architectural work, even if done at the point of construction. Retrofit would add even further to the costs. Such a facility would also require significant maintenance, adding to the bill. This cost can be very off-putting for all but the most critical of applications.

Another approach to the problem is to assess what protection is already there, the threats that are likely to be a problem, what really needs protecting, and to apply a staged protection scheme.

This concept doesn’t rely on a single component providing huge signal attenuation, but on multiple smaller and often incidental components to give a similar attenuation at a much reduced cost. The concept is shown in Figure 6. This is a tailored solution to suit individual scenarios and equipment.


Figure 6

It is here that the immunity tests of the EMC directive (and other regulatory EMC standards) become useful: they provide a good baseline on which to build with other protection methods. Caution should be exercised here, as there is a danger of “building on sand”. The EU “CE” mark is a self-certification system, meaning that a CE mark is only as trustworthy as the company placing the mark on the product.

One only has to look at the many analyses of USB phone chargers and LED lighting systems to know that many products fall far short of the standard (not just for EMC) when put to the test. Assuming that the regulatory immunity can be trusted, a typical attenuation of 60 dB might be required from perhaps 10 MHz to 1 GHz. It becomes less clear above this frequency, as many items of equipment are only tested up to 1 GHz, and so the base equipment immunity above this is often unknown.

The next asset in the protection scheme also comes for free – the architecture around the system. Several studies have shown that some buildings can provide up to 20dB of shielding, while others provide almost nothing, the difference being due to the materials used and their construction style.

For instance, concrete rebar can give 11 dB of shielding, yet wooden buildings would do well to give 4 dB. As with all areas of IEMI, details and specifics can make a huge impact. For instance, a metal-clad building may appear to offer a rudimentary Faraday cage, but if unfiltered conductors penetrate that cage, its benefit can drop from what would be 30 dB to -10 dB, creating a stronger field inside the building than outside. In this case, applying appropriate filtering would rectify the situation and provide a solid 30 dB. Note that these figures are for particular frequencies, and a proper study of the specific case should be done, with measurements taken if necessary.

The distance between a potential aggressor and a protected system should not be underestimated either and could be quite long relative to the wavelengths used in an attack. If the site has an extensive perimeter with security, or only a specific room needs to be protected in a large building or complex, this gives a natural attenuation to any radiated or conducted attack originating off-site.

As an example of the benefits of distance, basic RF theory tells us a 1GHz radiated attack could be attenuated by more than 50dB over just 10m. This is a practical, controlled perimeter distance for many sites, but caution is advised as this simple illustration is based on isotropic antenna gain and should be considered in that context.
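A quick back-of-the-envelope check of that figure, using the standard free-space path loss formula between isotropic antennas (the frequency and distance are simply the example values quoted above):

```python
import math

def free_space_path_loss_db(freq_hz, distance_m):
    """Free-space path loss between isotropic antennas, in dB."""
    c = 3.0e8  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# The example from the text: a 1 GHz radiated attack over a 10 m perimeter.
print(f"{free_space_path_loss_db(1e9, 10):.1f} dB")  # approximately 52 dB
```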

Equipment cabinets and cases can also have protective capability. A typical commercial EMC cabinet, compared to an unshielded rack, could provide a consistent 30 dB of attenuation up to 1 GHz and could still be providing some up to perhaps 5 GHz.
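To see how these incidental contributions stack up in a staged scheme, a simple decibel budget can be summed, ignoring interactions between the stages (the values below are only the illustrative figures quoted in this article and are frequency-dependent in practice):

```python
# Staged-protection budget using the illustrative figures quoted in this
# article; real values are frequency-dependent and site-specific.
stages_db = {
    "perimeter distance (1 GHz over ~10 m)": 50,
    "building fabric (filtered metal-clad example)": 30,
    "equipment cabinet (up to ~1 GHz)": 30,
}

total_db = sum(stages_db.values())
for stage, attenuation in stages_db.items():
    print(f"{stage:46s} {attenuation:3d} dB")
print(f"{'total attenuation before equipment immunity':46s} {total_db:3d} dB")
```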

The conducted protection should try to coincide with the shielding to avoid bypass coupling and prevent any compromises to the inherent shielding protection. If the building has very good shielding, then a large incoming filter at the entry point would be best. But if shielding is very poor or with potential access issues, then the cabinet or individual equipment must carry the majority of the shielding, and this is where the filtering should be located.

Distributed filtering can be used with several lower performance filters in place of a single high-attenuation filter. Some of those filters could be part of the original equipment, but bear in mind that, although most equipment has incoming power filters, these are often only low frequency for EMC compliance and not really suitable for IEMI protection. Furthermore the combination of filters in the system should cover the entire frequency spectrum of concern. This requires assessment against the probable threats and tolerable disruption: there is a standardised way to define these in the appendices of IEC 61000-4-36.

A vital part of the filtering solution is the surge suppression performance against pulse-type IEMI attacks, which can have very high power content and fast rise times. Those rise times can be in the order of nanoseconds or even picoseconds, billionths or trillionths of a second.

Compare this to the most common type of surge suppression – lightning protectors, typically spark gap or MOV (metal oxide varistor) types. These typically only need to operate on the microsecond timescale for lightning: although some of the technologies can operate far faster than this, in practice they don’t when used in lightning applications, due to many factors including installation and connection styles. This makes lightning protection very ineffective against IEMI, except for the very slow conducted pulses, i.e. those already in the lightning area of the frequency spectrum.
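One way to see why microsecond-class lightning devices struggle with IEMI is to convert rise time into an approximate signal bandwidth using the familiar 0.35 / t_r rule of thumb (the rise times below are generic examples, not measured data):

```python
# Approximate bandwidth from rise time using the 0.35 / t_r rule of thumb.
def bandwidth_mhz(rise_time_s):
    return 0.35 / rise_time_s / 1e6

examples = {
    "lightning surge (~1 us rise)": 1e-6,
    "HEMP E1 / IEMI pulse (~1 ns rise)": 1e-9,
    "fast IEMI pulse (~100 ps rise)": 100e-12,
}

for name, rise_time in examples.items():
    print(f"{name:35s} ~{bandwidth_mhz(rise_time):,.2f} MHz")
```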

This is where the crossover with HEMP is important: the MIL-STD 188-125 E1 pulse also has a fast rise time in the nanosecond scale and energy content far exceeding that of any likely IEMI attack. As the performance won’t suddenly cease at the top of the HEMP spectrum, this means that a MIL-STD HEMP protection device will protect against all but the fastest conducted pulses seen with IEMI threats. Nevertheless MIL-STD HEMP devices, as previously discussed, are expensive and quite likely excessive in all but the most sensitive and critical cases where HEMP protection is also likely to be a concern.

Therefore, in most cases what is desired is in effect a lower-cost, lower-performance HEMP filter, with performance stretching to at least 18 GHz. Fortunately, the update of IEC 61000-4-24 is nearing publication. It will define a range of performance criteria for HEMP protection in civilian applications, based on more relaxed residuals than the MIL-STD (which it also includes as a special case), but still required to respond to the same nanosecond-timescale pulse.

This provides a good basis for specification of IEMI surge suppressors and conductor filtering, as it requires demonstration of all the key attributes – fast pulse response, prevention of shielding bypass and ability to handle the power levels expected during such an attack.

Threat Detection

If the system in question can tolerate interruptions or damage without serious unrecoverable consequences, and the business case is not currently strong enough to invest in protection, there is an intermediate step before protection, and it remains complementary to protection even once protection is installed.

This takes the form of detecting any incidents and profiling them in the specific scenario, with the aim of gathering evidence for the cost/benefit analysis of protection systems, and of logging IEMI attacks or disruptions in order to positively distinguish threats from system faults. This has the added benefit of logging unintentional EMI effects in the increasingly crowded spectrum.

This approach has only become viable recently, thanks to a shift in the philosophy of detection systems. Traditional IEMI monitoring equipment is very large, expensive and complex, requiring highly skilled staff to operate. Such systems can give a full profile of any attack or threat detected, with analysis of the specific source in real time. However, the cost and maintenance of such a detection system can approach or exceed that of system protection, making detection a costly intermediate step for general use.

To make logical sense, what is required is a detection system of lower cost and complexity. This differs from the traditional detection approach by simply detecting anything that causes a large enough EM disturbance and logging it in the time domain.

By logging the disturbance in enough detail in the time domain, offline analysis can then be performed as shown in Figure 7, removing the need for complex analysis, and thus cost, within the detector. By keeping the costs low, large sites could deploy multiple detectors, giving a far more detailed view of the threat. Information that this could give to the analyser includes increased accuracy on wave shape and triangulation of the threat source, and attenuation provided by existing buildings, infrastructure or shielding.
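A minimal sketch of what such a low-cost detector’s event log might look like (the record fields, threshold and units are purely illustrative, not a description of any particular product):

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisturbanceEvent:
    """One logged EM disturbance, kept simple for later offline analysis."""
    timestamp: float      # UTC seconds, for correlation with CCTV and other evidence
    detector_id: str      # which detector on the site triggered
    peak_level: float     # detected peak, in whatever units the sensor reports
    samples: List[float] = field(default_factory=list)  # raw time-domain capture

event_log: List[DisturbanceEvent] = []

def log_if_triggered(detector_id: str, samples: List[float], threshold: float = 1.0) -> None:
    """Append an event to the shared log when the capture exceeds the threshold."""
    peak = max(abs(s) for s in samples)
    if peak >= threshold:
        event_log.append(DisturbanceEvent(time.time(), detector_id, peak, list(samples)))
```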

Figure 7 – Analysis screen

This solution gives the two desired outcomes from detection: an evidence trail for any cost/benefit assessments for stakeholders to invest in protection, and the time-stamping of disturbances, to be correlated with any CCTV or other evidence in legal proceedings.

Summary

It can be seen that the IEMI threat is real regardless of application – whether in security or defence, public or private sector – and that existing protection systems cannot be assumed to be adequate and in most cases will be found wanting by a well-planned attack.

The steps required to effectively and adequately protect against the risk of IEMI are clear – understanding the nature of the threat, taking advantage of existing protection systems and supplementing them with IEMI-specific measures where necessary.


Is EMC prepared to handle the challenges of the Internet of Things?

By Gunter Langer, Langer EMV-Technik GmbH

Contact: k.langer@langer-emv.de, www.langer-emv.de

The number of mobile devices such as smart phones, tablets and wearables has risen significantly over the past years. At the same time, wireless communication has increased due to higher data rates. Will the growing number of wireless devices multiply the EMC problems? Is today’s industry able to cope with the EMC requirements that the Internet of Things has in store for us?

 

If more devices have to interact with each other and their EMC quality remains at the present level, this will lead to more EMC problems from a statistical point of view. Furthermore, a device may be incompatible in practice even though it has passed the compliance test. Let us assume that an electronic device has passed the emission compliance test according to IEC 61000-6-3 or IEC 61000-6-4, for example. In practice, and in contrast to the test setup, the device may be located near a metallic object such as a housing. This may lead to field coupling, which in turn results in higher emissions than in the test. The dimensions of the metallic object are essential in this context. The field may stimulate standing waves that fit the dimensions of the metallic object and then cause additional emissions.

 

This means that in future, not only will wireless transmission problems arise but also problems due to emissions from devices.

 

Stricter device standards will not necessarily solve this compatibility issue.

The example above shows that the current compliance tests usually do not take any field coupling mechanisms into consideration. The field coupling mechanisms may suggest some helpful ideas on how to solve the problem.

 

It remains to be seen whether the measuring principles specified by the current standards are sufficient or whether new measuring principles will have to be developed.

 

Furthermore, new requirements are emerging in the field of EMC standards for ICs (IEC 61967 and IEC 62132).  Concrete IC EMC parameters will be needed as input values for EMC development tools / simulation programs for PCBs in future. It would be sensible to obtain these EMC parameters from measurements according to IC EMC standards. Unfortunately, the results of standard measurements are currently inadequate for such a purpose.

This procedure will become more important for IC development in the future.

 

These are the reasons why one should consider adapting the test methods of the standard measurement to this task. This will be shown below for ICs by taking conducted emissions as an example.

 

The interference suppression strategies currently used in electronics development come up against their limits. ICs as potential emission sources are not noticed as troublemakers until the first development sample has been completed. The developer comes across them when taking interference suppression measures in the device or on the PCB. Near-field probes are used to locate RF sources in the electronics. These do not identify the IC itself as the disturbance source but PCB traces into which the IC feeds disturbance currents and disturbance voltages. The electronics will then be modified with additional components, copper foil or other means. Last but not least, EMC measurements are carried out to confirm the success of the interference suppression measures taken after the redesign of the PCB.

This approach is very time-consuming and expensive. One big problem here is that selective EMC measures cannot be taken until the first functional development sample has been completed. Insights that could be crucial for EMC are gained when it is too late. Important decisions are taken in the development process without considering the results of the EMC test. Problems are almost inevitable because the EMC test results are obtained at such a relatively late point in time.

 

However, the industry demands faster and more efficient development in compliance with EMC. This can only be achieved by taking a completely new approach, one that begins early in the development process and delves deep into the emissions’ chains of action. Only hands-on knowledge of the emission source allows the developer to follow this path. Once ICs have been described more precisely as potential sources of emissions, appropriate measures can be taken much earlier and more efficiently to stabilize the whole device’s EMC.

 

Appropriate EMC parameters are a prerequisite and are thus subject to high demands. They have to describe the EMC problem zones of the ICs for practical use in industry. This means they must be suitable for the development of PCBs in compliance with EMC requirements. In addition, the IC’s EMC parameters must be linked with practical measures and strategies.

 

This approach should define electronics development in terms of EMC. On account of extreme miniaturisation, a higher susceptibility to electromagnetic disturbances is experienced in the field of device development today. The device manufacturers make increasing efforts to address the problems so as to suppress interference in devices and comply with the corresponding standards.

The problems described in the example above aggravate the situation even more. An important requirement for the Internet of Things in particular is that the devices function properly and reliably in their environment.

The extent to which device manufacturers can continue to master the EMC situation, aggravated as it is by miniaturisation, and to suppress interference in devices by spending more time and money on this work remains to be seen. Development in compliance with EMC requirements will represent an increasing share of the costs of device development. It is doubtful whether the EMC objectives can be fulfilled at all. Providing better EMC parameters in the fields of IC research and IC development in future can mitigate this problem. However, this means that more time and money will have to be spent here too. Of course, this also relates to wireless devices. German industry has started to respond to this mounting pressure.

Companies now work together with EMC advisors in solving EMC-related problems in the development of devices and complex systems by using new EMC technologies from the very beginning of the development process.

Main part

 

Due to internal functional operations, ICs generate RF voltages, currents and fields.  Different physical mechanisms are responsible for these entering cable harnesses in the form of emissions or the surrounding open space in the form of radiation. ICs may have the following effects:

 

  1. Conductive: emission of RF currents and voltages via the IC pins into the PCB traces,
  2. Capacitive/inductive: emission of E- and H-near fields from the die or connections of the IC,
  3. Radiative: direct emission of electromagnetic waves. Direct emissions are usually only crucial in the Gigahertz range for ICs with very high clock rates in practice.

 

Figure 1

Electric field of a PCB trace

 

The following section describes Items 1 and 2: conductive, capacitive and inductive effects in the PCB.

 

Emissions follow a closed loop. The driving RF-current and RF-voltage sources are located inside the IC. They drive RF into the PCB traces via the bonding wire, lead frame and pin where the current generates magnetic near fields and the voltage generates electric near fields. The electric and magnetic near fields would build up undisturbed if a PCB trace were to be freely positioned in space. The fields are similar to the E-fields and H-fields of an antenna. The electric field is closely coupled to the magnetic field via the antenna element, its current and voltage. This electric field pattern results in the emission of electromagnetic waves. The PCB trace acts as a transmission antenna.

 

The situation, however, is usually quite different on the PCB. The PCB contains metal surfaces. These metal surfaces usually extend over the entire PCB and have ground or supply voltage potential. The gap between these metal surfaces and the PCB traces is normally < 1 mm. These ground surfaces affect the distribution of the trace’s electromagnetic field. The effect can best be described by taking a loop antenna as an example. A loop antenna can emit electromagnetic radiation if positioned freely in space. If the loop antenna is placed on a ground surface, this will prevent the emission of electromagnetic radiation. This is because the conductive metal surface blocks the magnetic field in the opening of the loop on account of current / field displacement effects (skin effect). The loop antenna’s magnetic field can no longer build up around the antenna and is practically no longer present. Radiated emissions from the loop antenna are thus reduced considerably (Figure 2).

 

Figure 2

Blocking of a loop antenna’s magnetic field by a metal plate. While the magnetic field is blocked, the loop antenna’s near fields can stimulate the metal plate to radiate emissions (other radiation characteristics). If the gap between the loop antenna and metal plate is zero, the H-field is also zero.

 

The PCB trace reacts in precisely the same way. Direct emissions from the trace are prevented as soon as the ground surfaces in the PCB are large enough. Emissions from the trace will not increase until it is at a certain distance from the ground surface. The required distance depends on the length of the trace. Practical experience shows that the gap must be > 0.5 cm to cause any effective emissions (frequency range < 1 GHz) with a trace length of > 10 cm.

This means that emissions take other paths from a PCB, namely via its near fields.

These near fields cause emissions through interaction with metal parts (Vdd/Vss surfaces, large metal components, cables and lines, metallic structural parts).

 

Relationship between IC voltage and emissions

 

We refer to the PCB trace in the following text. The traces inside the IC follow the same principles. The statements on the PCB trace can thus be transferred to the traces inside the IC. The pin voltage which is present on the PCB trace or the trace inside the IC builds up an electric field around this trace (Figure 1). Most of the field lines lead to the PCB’s GND surface. Only a few field lines leave the PCB vertically upwards and penetrate into open space. The closer the trace is to the edge of the GND system, the more field lines penetrate the space.

These field lines (excitation field lines) leave the PCB’s GND system and carry displacement current through space which stimulates the entire metal system (PCB with cables and metallic structural parts) to vibrate electrically (Figure 3).

 

 

Figure 3

Stimulation of radiated emissions via electric excitation field lines

 

The standing waves on the metal system may cause emissions.

The electric excitation field may reach metal parts (cables, structural parts, shielding plates; Figure 4) located opposite the PCB, and these may be stimulated to vibrate electrically by the transferred displacement current.

 

Figure 4

Overcoupling of excitation field lines to neighbouring metal parts

 

Relationship between IC current and emissions

 

The IC’s current loops can either be located inside on the die, or loops can be formed by the IC’s pins. These loops run through the ground system of the PCB, pin, lead frame, bonding wire and die. This type of loop can be formed via Vdd or Vss pins, for example. The Vdd / Vss loops that penetrate to the outside may be much larger than the loops located inside the die. The larger outer loops can generate a stronger magnetic field and are usually responsible for the highest emissions.

 

As before, we refer to the PCB trace in the following text; the traces inside the IC follow the same principles, so the statements on the PCB trace can be transferred to them.

 

Figure 5

Stimulation of radiated emissions through mutual induction

 

The pin current, which flows into the PCB trace, builds up a magnetic field H2 (Figure 5). The returning pin current also generates a magnetic field H1 in the GND system (Figure 6). It is assumed that the PCB ground is a metal surface, which extends over the entire PCB. The trace is so close to the ground that it can usually only generate insignificant emissions, as in the loop antenna example above. The field H1 of the returning current induces a self-induction voltage UErr. in the GND plane of the PCB (metal surface). This voltage drives cables and structural parts that are connected to it like an antenna. The cables and structural parts emit electromagnetic waves as a result.

 

Figure 6

Stimulation of radiated emissions through mutual induction

 

The magnetic field H2 (Figure 5) of the trace cannot generate any radiated emissions in open space. This is because the trace is close to the ground plane, as in the loop antenna example above, which prevents emissions. There is, however, another chain of interactions that causes the magnetic field to radiate emissions, similar to the one described for the field H1 above. A metal part has to be inserted into the field H2 for this purpose. An excitation voltage is only induced there via mutual induction if the magnetic field encloses a metal part. The excitation voltage stimulates the metal part to act as an antenna, and the metal part emits electromagnetic waves. Examples are a steering column, a metal strut or a cable in the PCB’s neighborhood in a vehicle.

 

 

EMC parameters for IC pins

 

The IC pin current and IC pin voltage are the pin-related EMC parameters of an IC. The IC’s electric near field and magnetic near field are the field-related EMC parameters of an IC. All four parameters (u, i, E, H) of the IC have to be detected by suitable measuring devices.

The electric near field of the PCB traces is proportional to the pin voltage and the magnetic near field of the conductor loops of the PCB is proportional to the pin current of the IC. The pin current and pin voltage depend on the load to which the pin is subjected through the connected PCB trace.

 

The values of the cases in which the highest pin voltage and the highest pin current are generated have to be used for the IC parameters.

The current and voltage of the traces depend on the driving voltage in the IC and on the impedance of the load on the PCB traces.

 

The maximum possible pin current is measured if the pin is operated under short-circuit conditions. The maximum possible pin voltage is measured if the pin is operated under no-load conditions (open circuit). The maximum possible values have therefore been determined, and all values from practical operation (determined in a large number of measurements on different PCBs) are equal to or smaller than these maxima.

 

In special cases the PCB traces may operate close to open circuit; the voltage, and thus the electric near field, is then at its highest and the emission potential at its greatest.

 

The corresponding EMC parameter of the IC is its open-circuit voltage Ul(f). The magnetic near field is proportional to the current flowing through the trace. The current depends on the IC’s driving voltage and the load of the trace. A short circuit may occur in special cases. The current, the magnetic field and thus the emissions are then at their greatest.

The corresponding EMC parameter of the IC is its short-circuit current Ik(f).

 

The maximum pin current and pin voltage values (Ul(f), Ik(f)) are produced under short-circuit or open-circuit pin conditions. In these cases, the highest emissions are generated via the coupling mechanisms described above.

 

Hence, it follows that each pin of an IC has its own EMC parameters for conducted emissions. An IC pin’s EMC parameters are its open-circuit voltage and its short-circuit current.
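One way to read these two parameters, assuming a simple Thevenin-style source model for the pin (an interpretation used here for illustration, not something the measurement standards prescribe), is that Ul(f) and Ik(f) bound the voltage and current delivered into any trace load:

```python
# Sketch: a pin treated as a Thevenin source characterised by its
# open-circuit voltage Ul and short-circuit current Ik at one frequency.
# The model and the numbers below are assumptions for illustration only.

def pin_into_load(ul_volts, ik_amps, z_load_ohms):
    """Estimate pin voltage and current when driving a trace of impedance z_load."""
    z_source = ul_volts / ik_amps                    # implied source impedance
    current = ul_volts / (z_source + z_load_ohms)    # never exceeds Ik
    voltage = current * z_load_ohms                  # never exceeds Ul
    return voltage, current

# Hypothetical values: Ul = 1 mV, Ik = 0.1 mA, driving a 100 ohm trace.
u, i = pin_into_load(1e-3, 0.1e-3, 100.0)
print(f"pin voltage ~{u * 1e3:.2f} mV, pin current ~{i * 1e6:.1f} uA")
```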

 

Figure 7

Measurement of pin parameters close to short-circuit and open-circuit conditions

 

The open-circuit voltages and short-circuit currents can be determined for most pins of the IC through measurements under close to open-circuit and short-circuit conditions. Two spectra for each pin result in 128 spectra for a 64-pin IC, for example. Furthermore, the pin can have different switching states (input, output H, L and high-impedance). The internal function may also assume different states (Clk-PLL OFF/ON).

 

The current in the power supply pins is measured according to the 1 Ohm method. If the resistance of 1 Ohm is too great, a 0.1 Ohm measuring resistor is used. This measurement can be carried out in both the Vdd and Vss paths. A corresponding high-impedance probe and a decoupling capacitor can be used to measure the RF open-circuit voltage on crystal oscillator pins. The crystal oscillator’s filter capacitor may serve as the decoupling capacitor.
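Since the measured quantity is the voltage across the sense resistor, the pin current in dBµA follows directly from Ohm’s law; a small helper such as this (a sketch for illustration, not part of any standard’s tooling) does the conversion:

```python
import math

def dbuv_to_dbua(u_dbuv, r_ohms):
    """Convert voltage across a sense resistor (dBuV) to current (dBuA)."""
    # I = U / R, so in logarithmic terms: I[dBuA] = U[dBuV] - 20*log10(R)
    return u_dbuv - 20 * math.log10(r_ohms)

print(dbuv_to_dbua(60.0, 1.0))   # 1 ohm method: 60 dBuV measured -> 60 dBuA
print(dbuv_to_dbua(60.0, 0.1))   # 0.1 ohm resistor: 60 dBuV measured -> 80 dBuA
```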

The measurements may produce a large amount of data and become difficult to manage. A 3D representation provides a clear overview of the results (Figure 8). A custom-developed measurement set-up, with corresponding software (ChipScan ESA), allows semi-automatic recording of the pin spectra. The results are visualised in 3D. The representation can be switched over to 2D for selected pins (Figure 12).

 

Use of IC parameters

 

The 3D spectra clearly reveal the problematic pins for practical applications. Open-circuit voltages in the range of 80 dBµV can lead to limit-exceeding emissions over trace lengths as short as approx. 10 mm (particularly problematic in automobiles). The critical frequency range can be read from the 3D or 2D spectrum. Figure 12 shows this for the crystal oscillator pin 15; the critical frequency range extends up to 600 MHz. The layout and design can be steered in the right direction on the basis of the EMC parameters of the IC pins, saving time and money. There will be ICs where individual pins display high values for the conducted emission EMC parameters. These values provide helpful advice on how to use the IC on the PCB in a compatible way. Consequently, these ICs need not be excluded from developments. IC users should determine the IC’s EMC parameters before they start developing a PCB.

 

If ICs are integrated without this information (as is still common practice today), problems will not arise until the first development sample has been measured. This entails high costs for time-consuming interference suppression measures (layout changes, design modifications, etc.). This approach also permits an IC to be chosen from a range of alternatives because it will most likely cause lower emissions, and hence it will be easier and less costly to make its PCB assembly EMC compliant.

 

Two new helpful tools can be created for electronics development on the basis of the IC’s EMC parameters:

 

  1. Pin-related open-circuit voltage and short-circuit current spectra (3D / 2D)
  2. Layout and design tips in conjunction with the EMC parameters of the IC pins

 

An EMC specialist can derive such design tips (counter-measures) from the pin spectra, the interactions (Items 1 and 2) and the character of the specific application. In practice, however, design hints and tips are better provided in the form of pin-related information. The EMC parameters Ul(f), Ik(f) of the IC pins can be grouped into frequency-dependent level ranges with different risk potentials. A corresponding barrier of design measures has to be built up, depending on the risk potential. This strategy will be the basis of EMC activities over the years to come.

 

Figure 8

Open-circuit voltage of the test IC 01

 

 

Examples of pin-selective counter-measures in terms of open-circuit voltage:

The static port pins 16 to 35 (Figure 8) show high open-circuit voltages. This leads to emissions via the electric field if several port conductors are connected to PCB traces. As a counter-measure, the traces should be well enclosed by GND and not be located at the edge of the PCB.

 

Figure 9

Short-circuit current of the test IC 01

 

Examples of pin-selective counter-measures in terms of short-circuit current

 

The port pins 16 to 35 also provide relatively high short-circuit values (Figure 9). Filter capacitors located further away can generate critical current loops. As a counter-measure, the filter capacitors should be located in the vicinity of the IC or series resistors should be inserted.

High values are obtained for the supply pins 12, 13 in the lower frequency range (< 100 MHz) and pins 50, 51, 52 in the medium frequency range (around 500 MHz). As a counter-measure, the current loop that passes over the blocking capacitor can be attenuated with a resistor (< 10 Ohm) or a soft ferrite. The blocking capacitors and the IC should not be too close to the edge of the PCB (> 20 mm). The IC should be positioned so that the IC current loop is orthogonal to the PCB’s longest axis. This holds particularly true for PCBs that are not wider than 50 mm. The orientation of the IC current loops can be measured with field probes designed for RF field measurements on ICs and provided as IC EMC field parameters.

 

Measurement systems for EMC parameters of IC pins

 

Figure 10 shows the measurement set-up for pin current and pin voltage measurements.

 

Figure 10

Measurement system for pin current and pin voltage

 

The test IC (DUT) is placed on a test board which is embedded in a ground plane. This provides a continuous GND surface as a prerequisite for measurements up to the GHz range.

 

A (voltage or current) measuring probe, whose tip can be moved easily to contact each pin, is placed on the GND plane. The measuring path (IC – pin contact – probe) is only a few millimetres long, so the measurement can be carried out over a short electrical distance. The IC is supplied and controlled by the connection board via filters (Figure 10). The connection board is integrated into the ground plane.

 

Practical example

 

Figure 11

Measurement of an IC 02 application with a simulated on-board power system. Limit value violation of 24 dB at 120 MHz. Cause: E-field coupled out of a trace connected to the IC 02

 

Figure 11 summarizes the results of a measurement on vehicle components. The limit value violation of 24 dB occurs at 120 MHz due to an E-field. This problem was not discovered until the development sample was tested. A measurement of the open-circuit voltage Ul(f) of the IC pin as one of the IC EMC parameters reveals the cause.


Figure 12

Open-circuit voltage measurement on IC 02 in 3D and 2D

 

Exceptionally high voltages (approx. 80 dBµV at 120 MHz) were measured on the IC pins for the crystal oscillator in a 40 MHz grid (shown in black in Figure 12).

All lines and metal parts connected to these pins emit an electric field as described under Item 1 of the physical mechanisms.  The electric field is exceptionally strong and causes the PCB and the cable harness to vibrate electrically.

This means that the field is coupled out via:

 

– the bonding wires and lead frame of the IC pins that lead to the crystal oscillator,

– the 15 mm PCB traces from the IC to the crystal oscillator,

– the crystal oscillator housing and its wiring (3 x 0603 SMD components).

 

A suitable remedy in this case is to reduce the surface of these metal parts, i.e. to shorten the traces and embed them in GND, and to use a smaller crystal oscillator housing. However, these counter-measures are not sufficient in our example. The open-circuit voltage Ul(f) of the pin is so high that the metal surface of the bond wire and lead frame alone is large enough to cause a limit value violation during the component measurement. Filter capacitors cannot be used to reduce the voltage on the crystal oscillator. An E-field shield directly above the IC can be used as a final remedy. Figure 13 shows the positive results achieved thanks to these counter-measures: the limit values are met.

 

Figure 13

Verification measurement after the counter-measure was taken: the IC 02 shielding prevents E-field from being coupled out.

 

The EMC characteristics of ICs can already be determined today. It is useful if the values obtained are entered in product data sheets. This information allows the developer to plan, during the development process, the EMC measures that are necessary for the PCB, so that in principle any IC can be used. Test methods to determine the IC EMC parameters also enable the IC manufacturer to develop ICs more efficiently.

 

Due to the continued miniaturization of modules and the high number of very complex electronic devices, the EMC assessment of ICs is a valuable prerequisite for the future development of electronic devices. The use of IC EMC parameters will also have a positive effect on the development of the Internet of Things.

 


 


Assembling A Low Cost EMI Troubleshooting Kit – Part 1 (Radiated Emissions)

By Kenneth Wyatt

Wyatt Technical Services LLC

Those of us who are either in-house or independent EMC consultants can benefit greatly by assembling our own EMI troubleshooting kit. I’ve depended on my own kit for several years and it has proven not only valuable, but also conveys a sense of professionalism in dealing with your own product development engineers, their managers, or your clients, as the case may be. Mine is designed around a Pelican 1514 roller case (http://www.pelican.com) that includes a padded divider, so it is easy to transport to the area needed. You’ll also want to order the optional lid organizer, model 1519, for carrying extra tools, cables, and other small parts. See Figure 1.

This article will summarize what I’ve included in my own kit, and because everyone’s needs might be a little different, you’ll want to use this information as a guide. Feel free to add or subtract tools and test equipment as desired. You should expect to spend about $3k to $5k for the complete kit, depending on whether you make a lot of DIY probes or buy commercial, but this price range includes a spectrum analyzer.

I’ll list just the most important items for assessing radiated emissions in Part 1. You’ll be able to download Part 2 at the end of this article, which will include additional items required for assessing various immunity tests, along with many other useful tools and equipment. Some of this information is based on the book, EMI Troubleshooting Cookbook for Product Designers[1], by Patrick André and Kenneth Wyatt, with foreword by Henry Ott.

 

Figure 1 – Troubleshooting kit with most of the major components shown. The spectrum analyzer is the Thurlby Thandar model PSA6005 and tunes from 10 MHz to 6 GHz. Everything fits inside a Pelican 1514 roller transit case. The contents are described in Part 1 and Part 2 of this article.

Spectrum Analyzers

You’re probably wondering about the spectrum analyzer, so we’ll start with that first. The spectrum analyzer is the one piece of gear that’s essential for EMI troubleshooting, but it has traditionally been the most expensive item in anyone’s kit. Many smaller or mid-sized companies may not have the budget to purchase a lab-quality analyzer, which can start at a base price of $10k or more. While you may find older used spectrum analyzers on sites such as eBay or from used equipment dealers, several manufacturers are now making lower cost quality instruments that are perfectly adequate for troubleshooting and pre-compliance work. I’ve listed several instruments from which to choose – in categories good, better, and best.

GOOD – I’ve run into a very low cost spectrum analyzer solution: the Triarchy Technologies (http://triarchytech.com) USB-controlled spectrum analyzer, which is about the size of a large thumb drive (Figure 2). Triarchy makes several models covering up to 12 GHz, but their Model TSA6G1 covers most of the commercial frequency range of 1 MHz to 6.15 GHz, can measure signals from -110 to +30 dBm, and costs just $629 through their eBay store or through their North American distributor, Saelig Electronics (http://www.saelig.com). The unit comes with Windows PC software and works perfectly well for troubleshooting. I wouldn’t necessarily use it for pre-compliance testing, but it should still provide a good enough indication as to whether you’re in the ballpark of passing or failing.
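When comparing analyzer readings in dBm with emission data or limits expressed in dBµV, the conversion in a 50 Ω system is just a fixed offset of about 107 dB; a tiny helper (illustrative only, not tied to any particular instrument’s software) makes this explicit:

```python
def dbm_to_dbuv(p_dbm):
    """Convert power in dBm to voltage in dBuV, assuming a 50 ohm system."""
    # 0 dBm into 50 ohm is 0.2236 Vrms, i.e. about 107 dBuV.
    return p_dbm + 106.99

print(dbm_to_dbuv(-110))   # analyzer noise floor: about -3 dBuV
print(dbm_to_dbuv(30))     # maximum input: about 137 dBuV
```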

Figure 2 – Here’s an example of a low cost USB powered spectrum analyzer. This one is made by Triarchy Technologies and is sensitive enough for general EMI troubleshooting. The model TSA6G1 tunes from 1 MHz to 6.15 GHz.

BETTER – You may want to consider a better quality analyzer. I’ve been using the Thurlby Thandar (TTi) PSA2702T (1 MHz to 2.7 GHz handheld, at $1695) for several years now (Figure 3) and recently upgraded to the model PSA6005. TTi is a British company (http://www.aimtti.com in the UK and http://www.aimtti.us in the U.S.), well known for their lines of test and measurement equipment. Newark (http://www.newark.com) and Saelig Electronics (http://www.saelig.com) are the North American distributors. Many other independent consultants are also using this one. It’s truly handheld and will easily fit into the recommended transit case. The PSA6005 tunes from 10 MHz to 6 GHz and costs about $2,700.

These analyzers offer most of the usual settings for resolution bandwidth, frequency setting, saving/recall of instrument setups, different detectors, averaging, and max hold. The controls are all laid out at the bottom of the screen in a hierarchical fashion top to bottom. They also include two cursors, which can read out both frequency and amplitude simultaneously. There is also USB connectivity for control by free PC software. Battery life is very good at four to six hours. I can plug in a near field probe directly to the RF input and use the entire unit to quickly evaluate a large system for EMI issues.

Figure 3 – The Thurlby Thandar PSA2702T is an affordable portable spectrum analyzer that covers 1 MHz to 2.7 GHz. The cost is just $1,695 from Saelig or Newark Electronics (Photo, courtesy Thurlby Thandar Instruments).

BEST – There are several affordable choices of quality benchtop analyzers. My two favorites are the Rigol DSA815 (Figure 4) and the Siglent SSA3000X-series (Figure 5). Rigol Electronics, a test and measurement company based in China (http://www.rigolna.com), offers its Model DSA815 (9 kHz to 1.5 GHz) spectrum analyzer for $1,295, with an optional tracking generator ($200). The extra EMI option ($599) adds the three EMI resolution bandwidths (200 Hz, 120 kHz and 1 MHz) and a quasi-peak detector. The front panel is nicely laid out and easy to use. Screen captures may be made for documentation purposes, and software is available to control the analyzer from your Windows PC. The unit includes all the usual features of a more expensive lab-grade analyzer and is accurate enough for all your pre-compliance and troubleshooting needs. Besides the usual controls, you can display up to three traces and six markers. The tracking generator allows you to evaluate filters, antennas, and resonances.

Figure 4 – The Rigol DSA815 is an affordable spectrum analyzer that covers 9 kHz to 1.5 GHz. The base cost is just $1,295. Rigol also has models in the series that cover up to 7.5 GHz. (Photo, courtesy Rigol Electronics).

Another recent offering from Siglent Technologies (http://www.siglent.com), also based in China, is the SSA3000X-series of low-cost spectrum analyzers. It uses the same compact form factor as the Rigol, but is a little wider to accommodate its larger display. The base unit tunes from 9 kHz to 2.1 GHz, with another model going to 3.2 GHz, and similar EMI and tracking generator options are offered. The control layout is similar to the Rigol’s and easy to use, and both Siglent models offer slightly better specifications for amplitude accuracy and frequency resolution. The free Windows PC software will also help define limit lines and perform automated pre-compliance testing and documentation.

Figure 5 – The new Siglent SSA3000X-series spectrum analyzer tunes from 9 kHz to 2.1 GHz or 3.2 GHz, depending on the model.

Finally, there’s a third option that’s very affordable considering its specifications rival those of lab-quality analyzers. Rohde & Schwarz (https://www.rohde-schwarz.com/us/home_48230.html) recently announced the FPH “Spectrum Rider” portable analyzer, with a base price of $5,280 (Figure 6). I also recommend purchasing the built-in preamplifier option for $440. I had a chance to review this unit and was pleasantly surprised: it has much of the functionality of much pricier analyzers, yet is a compact, battery-operated portable. It will exceed my total budget for the troubleshooting kit, though!

The instrument controls are laid out clearly, and I really didn’t need the user guide to start using it. The unit tunes from 5 kHz to 2 GHz, with 3 or 4 GHz upper limits as options. While there’s no tracking generator option, the unit has a lot to offer in accuracy and useful features. The battery life is rated at eight hours, and the unit is moisture-proof with a display that’s easy to read even in full sunlight. Perfect for field use!

Figure 6 – The Rohde & Schwarz FPH “Spectrum Rider” portable spectrum analyzer tunes from 5 kHz to 2 GHz, with an optional upper limit of 3 or 4 GHz as budgets allow. The specifications are very similar to much higher-priced lab-quality analyzers, and the base price is just $5,280.

Any of these analyzers should serve you well, but my preference (if traveling light) remains the TTi PSA2702T because it’s fast to use and fits so well into the transit case that I avoid carrying a second piece of gear. The advantage of the Rigol and Siglent analyzers is that they are more accurate than the TTi PSA2702T and include a preamp, tracking generator, the EMI bandwidths, and a quasi-peak detector. However, for the base price, they’re limited to just 1.5 or 2.1 GHz, respectively. The tracking generators are a valuable troubleshooting tool for determining resonances and filter responses. Of course, both Rigol and Siglent have models that go higher in frequency (7.5 and 3.2 GHz, respectively) for additional cost. The Rohde & Schwarz analyzer has even better specifications and is battery-powered, but has no means to add a tracking generator.

Real Time Spectrum Analyzers

If your products include wireless or fast serial data streams, you might wish to consider an affordable real-time spectrum analyzer. A real-time analyzer can capture brief intermittent signals, making it well suited to modulated wireless or digital signals as well as general EMI troubleshooting. Low-cost examples include the Tektronix (http://www.tek.com) RSA306 (Figure 7) and the Signal Hound (http://signalhound.com) BB60C (Figure 8). Both include feature-rich PC software, and either model should fit nicely into the transit case, as both are relatively small.

For a more detailed review of these two analyzers, as well as several other lab-quality models, be sure to download the new 2016 Real-Time Spectrum Analyzer Mini Guide[2] from Interference Technology.

Figure 7 – The Tektronix RSA306 USB-controlled real-time spectrum analyzer covers 9 kHz to 6.2 GHz and has a real-time bandwidth of 40 MHz. The base cost is $3,489 and there are several digital modulation display options.

 

Figure 8 – The Signal Hound BB60C USB-controlled real-time spectrum analyzer covers 9 kHz to 6 GHz and has a real-time bandwidth of 27 MHz. The cost is $2,879.

Real-time analyzers can detect and capture very short intermittent pulsed signals. For example, within the 2.4 GHz ISM band you’ll clearly see the entire spread-spectrum Wi-Fi signal as well as the frequency-hopped Bluetooth signals, and you can even observe multiple Wi-Fi access points on the same channel. This isn’t possible with normal swept-frequency spectrum analyzers. Real-time analyzers also commonly include “waterfall” displays of frequency and amplitude versus time, a very powerful troubleshooting tool for intermittent EMI issues.
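
To make the waterfall idea concrete, here is a conceptual Python sketch of my own (not tied to the Tektronix or Signal Hound software, and using made-up signal parameters): it stacks repeated FFT frames against time, which is why a brief burst or frequency hop still shows up in at least one frame.

```python
# Conceptual illustration only: how a waterfall accumulates FFT frames over time.
import numpy as np

fs = 40e6      # sample rate, roughly the real-time bandwidth of the capture (illustrative)
frame = 1024   # samples per FFT frame
t = np.arange(frame) / fs

def spectrum_frame(iq):
    # windowed FFT of one frame, returned in dB
    return 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(iq * np.hanning(frame)))) + 1e-12)

# Simulate 200 frames of low-level noise with a short burst at +5 MHz in frames 90-95
waterfall = []
for n in range(200):
    iq = (np.random.randn(frame) + 1j * np.random.randn(frame)) * 0.01
    if 90 <= n <= 95:
        iq += np.exp(2j * np.pi * 5e6 * t)   # the brief intermittent signal
    waterfall.append(spectrum_frame(iq))

waterfall = np.array(waterfall)  # rows = time, columns = frequency
print(waterfall.shape)           # plot with matplotlib's imshow() to see the burst as a streak
```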

 

Troubleshooting with Spectrum Analyzers

Typically, we’ll use E-field and H-field probes, clamp-on current probes, or voltage probes with spectrum analyzers. These are described more fully later.

For troubleshooting purposes, it’s also possible to use standard oscilloscope probes with spectrum analyzers. Just make sure any scope probe or E-field probe is capacitively coupled to the signal line (or use a capacitive isolation adapter or DC block at the analyzer input), so that large DC voltages can’t reach the analyzer’s sensitive input; that’s a good way to damage the front-end circuitry. Don’t put much faith in the absolute measurement, as a 10:1 probe connected to a 50-ohm spectrum analyzer input won’t be very accurate. However, you can still measure relative improvements as the troubleshooting process progresses. Rigol Electronics has an application note on how to use an oscilloscope probe with a spectrum analyzer[3].
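
As a small illustration of that relative approach, here is a minimal Python sketch (my own, with assumed readings, not from the Rigol note): it converts the analyzer’s dBm readings to dBµV for a 50-ohm input and applies only the nominal 20 dB factor of a 10:1 probe, so the absolute values are rough but the before-and-after difference is still meaningful.

```python
import math

def dbm_to_dbuv(dbm, z_ohms=50.0):
    # dBuV = dBm + 90 + 10*log10(Z); about a 107 dB offset for a 50-ohm input
    return dbm + 90.0 + 10.0 * math.log10(z_ohms)

def with_nominal_probe_factor(dbuv, probe_atten_db=20.0):
    # nominal 10:1 probe attenuation only; the real response into 50 ohms is not flat
    return dbuv + probe_atten_db

before = with_nominal_probe_factor(dbm_to_dbuv(-47.0))  # reading before a fix (assumed)
after = with_nominal_probe_factor(dbm_to_dbuv(-55.0))   # reading after adding a filter (assumed)
print(f"Relative improvement: {before - after:.1f} dB")  # the number that actually matters
```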

Near-Field Probes

Near-field probes, or “sniffer” probes, are small electric or magnetic field pickup devices used to locate the sources of emissions generated by a circuit or component (Figure 9). The E-field probe is essentially a stub antenna at the end of a coaxial line; one can be made by cutting away about 1/4 inch of the outer shield to expose the center conductor. Insulate the end so it won’t short to anything. The H-field probe is generally a small loop of coaxial cable made by connecting the center conductor to the outer shield. The size of the stub or loop determines the probe’s sensitivity, but a larger probe also limits the upper frequency range and the ability to localize the source. These near-field probes are easy to make yourself from regular or semi-rigid coax cable.
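
To get a feel for why loop size matters, here is a small illustrative sketch of my own (not from the article), based on Faraday’s law: the open-circuit output of an electrically small loop in a sinusoidal magnetic field is roughly V = 2πf·µ0·H·A, so output grows with loop area and frequency while spatial resolution shrinks.

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def loop_output_volts(h_a_per_m, freq_hz, loop_diameter_m):
    # induced EMF of an electrically small loop: V = 2*pi*f * mu0 * H * A
    area = math.pi * (loop_diameter_m / 2.0) ** 2
    return 2.0 * math.pi * freq_hz * MU0 * h_a_per_m * area

# Same 100 MHz field (assumed 0.01 A/m) seen by a 25 mm loop and a 5 mm loop:
print(loop_output_volts(0.01, 100e6, 0.025))  # larger loop: ~25x more output, coarser resolution
print(loop_output_volts(0.01, 100e6, 0.005))  # smaller loop: less output, pinpoints a trace or pin
```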

Figure 9 – A few E-field and H-field probes made from short pieces of semi-rigid or flexible coax cables.

Near-field probes can be either very useful or very misleading. Larger probes, which are more sensitive, can pick up ambient signals from high-powered broadcast radio and TV. One way to check an individual probe’s sensitivity to ambients is to sweep the 88 to 108 MHz FM broadcast band. If your favorite station shows up on the oscilloscope or spectrum analyzer, you’ll need to separate ambients from product emissions: move the probe away from the unit, or power the unit down if possible. If a signal does not go away, treat that particular frequency as an ambient and ignore it.

H-field probes couple best when the loop is oriented in the same plane as the wire, cable, or circuit trace, because this allows the most lines of magnetic flux to pass through the loop (Figure 10). The larger loop probes are the most sensitive, but they don’t offer as high a spatial resolution as the smaller loops. The smallest probes can trace RF noise currents to a single trace or integrated circuit pin.

Figure 10 – Proper positioning of an H-field probe for maximum coupling.

Most H-field loop probes are shielded against E-fields, but the capacitance between the shield and the circuit being measured adds a parasitic element that can cause a high-frequency resonance (about 700 to 1,000 MHz, depending on the probe design). By constructing an unshielded loop you can avoid this resonance, but you also sacrifice rejection of E-fields.
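
As a rough illustration with assumed (not measured) values, the resonance falls at approximately f = 1/(2π√(LC)), where L is the loop inductance and C the parasitic shield capacitance; plausible values for a small shielded loop land it right in the range mentioned above.

```python
import math

def resonant_freq_mhz(l_henry, c_farad):
    # simple LC resonance estimate: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad)) / 1e6

# Assumed example: ~50 nH of loop inductance against ~1 pF of shield-to-circuit capacitance
print(f"{resonant_freq_mhz(50e-9, 1e-12):.0f} MHz")  # ~712 MHz, within the 700-1,000 MHz range
```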

Because most circuit traces are low impedance, and are therefore relatively high-current structures, H-fields tend to dominate in digital products. We tend to use H-field probes to locate “hot” signal sources on cables or circuit traces (Figure 11). By carefully sweeping the probe around the circuit board and interior cables, areas of high emissions can be located. On the other hand, E-field probes are most useful for detecting leakage at chassis seams or gaps, where there might be high levels of E-fields.

Figure 11 – Using an H-field probe to locate hot spots on a circuit board. For higher resolution measurements, smaller probes should be used.

You need to be careful when mapping out “hot spots” of RF energy. Just because you measure a high field level in a certain part of the circuit board or cable does not necessarily mean that energy will couple out and radiate. It all depends on whether there is a coupling path from the RF energy source to some “antenna-like” structure, such as an I/O or power cable. Generally, near-field probes are good for identifying potential emission sources, but I rely on nearby antennas to troubleshoot actual emissions from a product.

If you prefer low-cost commercial probes, I can recommend the sets from Beehive Electronics (http://beehive-electronics.com) or Tekbox Digital Solutions (http://www.tekbox.net). The Beehive probe set is $300, and you’ll also want to include the 1 m SMB-to-SMA cable for $50 (Figure 12). Tekbox sells a set of four probes with cable for $200, or the probe set with a broadband preamplifier for $330. Saelig Electronics (http://www.saelig.com) is the North American distributor for Tekbox; the Beehive probes may be ordered directly from Beehive Electronics.

Figure 12 – A typical near-field probe set. In this picture, there are three H-field loop probes and one E-field probe. (Courtesy, Beehive Electronics.)

Current Probes

Clamping a current probe around a wire or cable measures the common-mode RF current flowing in that wire or cable. Current probes typically use a toroidal core of broadband ferrite or similar material; the frequency range and sensitivity depend on the core material and the number of turns of pickup wire wound around the core. On emission-only probes, a resistive network is often used to control the impedance and flatten the response. The factor relating the measured voltage to the cable current is known as the correction factor, transfer impedance, or transducer factor. Similar probes, called bulk current injection (BCI) probes, are used to inject RF energy into a cable for conducted RF immunity tests.
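
Using the transfer impedance is simple bookkeeping, sketched below with hypothetical numbers (the transfer impedance itself comes from the probe’s calibration chart): the cable current in dBµA is the analyzer reading in dBµV minus the transfer impedance in dBΩ.

```python
def cable_current_dbua(analyzer_dbuv, zt_dbohm):
    # I(dBuA) = V(dBuV) - Zt(dBohm), using the probe's published transfer impedance
    return analyzer_dbuv - zt_dbohm

reading_dbuv = 46.0  # analyzer reading at the harmonic of interest (assumed)
zt_dbohm = 15.0      # probe transfer impedance at that frequency (hypothetical value)
print(f"{cable_current_dbua(reading_dbuv, zt_dbohm):.0f} dBuA of common-mode current")  # -> 31 dBuA
```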

Current probes are a very useful troubleshooting tool. Measuring the current on individual cables indicates which ones are the main contributors to radiated emissions, and reducing the RF current on those cables often reduces the radiated emissions from the equipment under test. Importantly, by knowing the harmonic common-mode current flowing in a cable at a given frequency, you can calculate the expected E-field emission level and compare it to the radiated emission limit. In other words, you can predict pass/fail for a particular cable simply by measuring the RF current through it. Refer to the article referenced below for details.
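
Here is a minimal sketch of that prediction, using the commonly quoted worst-case short-dipole model over a ground plane rather than the exact method of the referenced article, and with assumed example numbers: E in µV/m is roughly 1.26 × I(µA) × f(MHz) × L(m) / d(m) while the cable is electrically short, and the result can be compared directly to a limit such as the FCC Class B 40 dBµV/m limit at 3 m (below 88 MHz).

```python
import math

def efield_dbuv_per_m(i_cm_ua, f_mhz, cable_len_m, dist_m):
    # worst-case short-dipole estimate over a ground plane:
    # E(uV/m) ~= 1.26 * I(uA) * f(MHz) * L(m) / d(m), valid while the cable is electrically short
    e_uv_per_m = 1.26 * i_cm_ua * f_mhz * cable_len_m / dist_m
    return 20.0 * math.log10(e_uv_per_m)

i_cm_ua = 10 ** (31.0 / 20.0)  # the 31 dBuA measured above, about 35 uA (assumed example)
estimate = efield_dbuv_per_m(i_cm_ua, 80.0, 1.0, 3.0)  # 80 MHz harmonic, 1 m cable, 3 m distance
print(f"Estimated {estimate:.1f} dBuV/m vs. a 40 dBuV/m Class B limit at 3 m")  # predicts a failure
```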

It’s possible to make your own current probes from ferrite toroids or clamp-on chokes (Figure 13). I published an earlier article on making and using current probes, The HF Current Probe: Theory and Application, in Interference Technology’s 2012 EMC Directory and Design Guide[4], and would refer you to that resource for more detail.

Figure 13 – An example of simple current probes you can make to measure harmonic RF currents in cables.

The advantage of commercial current probes is that they easily clamp around the wire or cable to be evaluated and they are calibrated to accurately read RF current. While the Fischer model F-33-1 probes (http://fischercc.com) are used as an example (see Figure 14), there are many other good manufacturers of current probes, such as Pearson, Rohde & Schwarz, Teseq, Solar, and ETS-Lindgren.

Figure 14 – A matched set of clamp-on Fischer Custom Communications model F-33-1 current probes. While not essential to purchase, a matched set is very useful for advanced troubleshooting of I/O cable emissions. They can sense RF currents of a few microamps.

Antennas

EMI antennas can be very expensive, so I recommend smaller, low-cost antennas, such as the rabbit-ears TV antennas still available in some TV and electronics parts stores (Figure 15). UHF TV “bowtie” antennas also work well from 300 to 800 MHz. Both will perform just fine for troubleshooting purposes.

Remember, EMC troubleshooting relies more on relative changes than on absolute measurements. For example, if you know your product is failing by 4 dB, reducing the problem harmonic by 10 dB at your own facility, as measured with a nearby antenna, should provide reasonable assurance of passing.
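
The bookkeeping behind that judgment is simple; here is a tiny sketch with assumed numbers:

```python
overage_db = 4.0       # amount the product failed by at the test lab
margin_db = 6.0        # extra headroom you'd like on top of that
baseline_dbuv = 58.0   # level read at the failing frequency with your own antenna (assumed)

target_dbuv = baseline_dbuv - (overage_db + margin_db)
print(f"Apply fixes until the harmonic reads below {target_dbuv:.1f} dBuV on your analyzer")
```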

Figure 15 – Simple rabbit-ears TV antennas may be used to pick up radiated emissions from a product under test. They tune from 85 to about 220 MHz, depending on how far the elements are extended. Epoxy a BNC connector to the housing and connect each terminal to one of the telescoping elements.

Also available are low-cost (under $30) PC-board broadband log-periodic antennas from Kent Electronics (www.wa5vjb.com), shown in Figure 16. These are designed for several frequency bands starting at 400 MHz and have about 6 dB of gain across each band. They work well for general troubleshooting and are what I currently use. Being flat, they fit easily into the transit case.

Figure 16 – These PC board log-periodic antennas are low in cost and are resonant in several bands from 400 MHz to 11 GHz. They are available from http://www.wa5vjb.com.

I mount mine using a small tabletop photo tripod and a DIY fixture made from PVC pipe (Figure 17). I tapped and threaded the 90-degree coupling to fit the tripod and used a handsaw to cut a narrow slit in the other end. The PC board just presses into the slot. I left the horizontal piece unglued, so I can rotate the antenna for horizontal or vertical polarization.

Figure 17 – One of the PC board log-periodic antennas mounted to a tabletop camera tripod. By setting this up near the product under test, the emissions may be observed during troubleshooting.

For a little more money, I also like the small active broadband antenna from Aaronia AG (Figure 18). Their model BicoLOG 30100X covers 30 to 1000 MHz and includes a battery-operated broadband preamplifier. The cost is just $1,299, and it may be ordered directly from Aaronia AG in Germany (http://www.aaronia.com/products/antennas/BicoLOG-30100-X/) or from their North American distributors, Kaltman Creations (http://kaltmancreationsllc.com) and Saelig Electronics (http://www.saelig.com). Aaronia also has compact antennas that tune from 700 MHz to 6 GHz.

Figure 18 – A small broadband active antenna from Aaronia that is very useful for bench top troubleshooting of radiated emissions.

Whether an antenna is resonant at the harmonic frequency of concern is not that important. So long as you can observe the RF harmonics at a distance of 1 m or more, the troubleshooting process can start. Figure 19 shows my typical setup for evaluating and troubleshooting radiated emissions. By monitoring the spectrum analyzer as you try various fixes, you can see immediately whether progress has been made. The best part is that you can do this testing right at your lab bench.

 

Figure 19 – The typical setup used to troubleshoot radiated emissions. Position the antenna and spectrum analyzer about 1m away from the product under test so you can observe progress in real time.

Ferrite Cores and Chokes

RF currents on cables (and the associated radiated emissions) may usually – but not always – be reduced by clamping a ferrite choke around the I/O or power cable nearest the source of RF noise. Adding a few of these chokes in various sizes to your kit is helpful for troubleshooting (Figure 20). For frequencies below 30 MHz, it’s often best to use a large (2.4-inch) toroidal ferrite core of type 31 or similar material with multiple turns of the cable passed through it; this is a common cure for interference to (or from) consumer equipment. Most common beads and clamp-on ferrites are more effective at frequencies in the hundreds of MHz, unless the ferrite material is specifically designed for lower frequencies.
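
One rule of thumb behind the multiple-turns advice, shown in an idealized sketch of my own below (it ignores the core’s frequency-dependent permeability and the inter-winding capacitance that eventually limits high-frequency performance): a choke’s inductive impedance scales roughly with the square of the number of passes through the core.

```python
def choke_impedance_ohms(z_single_pass_ohms, turns):
    # inductive impedance scales roughly as N^2 (idealized; ignores parasitics)
    return z_single_pass_ohms * turns ** 2

z1 = 25.0  # assumed single-pass impedance of a toroid at a low frequency of interest
print(choke_impedance_ohms(z1, 1))  # one pass: 25 ohms
print(choke_impedance_ohms(z1, 3))  # three turns: 225 ohms at the same frequency
```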

Figure 20 – Examples of various clamp-on ferrite chokes.

Miscellaneous

Adhesive copper tape is also useful for temporarily sealing enclosure joints during troubleshooting. Rolls of this tape may be purchased from electronics distributors for $30 or more per roll. I’ve also found that the “snail tape” (under $10) sold for gardening may be substituted; it’s available in garden stores or on Amazon. Take care not to cut yourself on the sharp edges.

Aluminum foil is also handy as a troubleshooting tool for wrapping around an interfering product to assess whether additional shielding might help. Note that aluminum foil is not as effective at power line frequencies, where the dominant coupling is via low-frequency magnetic fields that thin, non-magnetic foil does little to attenuate.

Finally, a selection of capacitors, resistors, inductors, and common-mode chokes is useful for applying filtering to I/O, microphone, and power line cables.

 

For more information, feel free to check my web site at http://www.emc-seminars.com, my EMC blog at http://www.design-4-emc.com, or Interference Technology at http://www.interferencetechnology.com.

 Kenneth Wyatt is an EMC consultant and senior technical editor for Interference Technology. He may be reached at ken@emc-seminars.com for consultation or kwyatt@interferencetechnology.com for editorial questions.

 

[1] EMI Troubleshooting Cookbook for Product Designers is available from Amazon and Stylus Publishing in the U.S., and from The IET in Europe. Go to http://www.emc-seminars.com or http://www.anderconsulting.com for specific links.

[2] Download the 2016 Real-Time Spectrum Analyzer Mini Guide, from Interference Technology here: http://itemmedia.wufoo.com/forms/p17royzx0hl32fe/.

[3] How To Use A Probe with a Spectrum Analyzer: http://www.rigolna.com/products/spectrum-analyzers/dsa800/dsa815/

[4] The HF Current Probe: Theory and Application: http://www.interferencetechnology.com/the-hf-current-probe-theory-and-application/.

 

 

 
