PCB Design

Bananas in Rectangles or How to save cost in PCB purchase

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains how to achieve 10% cost savings on PCB bare-board material.

Read this if a 10% cost saving in PCB bare-board material would be a significant number for your organization.

This article discusses some cost-driving aspects of PCBs (Printed Circuit Boards). Of course, many parameters drive the cost of a PCB; I will focus on the optimal usage of a PCB inside a panel. To get all readers to the same level, let me explain how a printed circuit board is handled in manufacturing and where the cost drivers I am talking about come into play.

A PCB layout designer crafts the design. The result is a single PCB, as shown.

But a single PCB cannot run through the manufacturing machines during assembly, soldering and test. Therefore, we place the single PCB several times into a (mostly rectangular) frame that we call the "Assembly Panel". Its XY size is driven by the sizes the manufacturing machines can handle, which may also vary from product to product on the same line. Some manufacturing plants try to keep one dimension of the panel constant, so that they do not need to change the machine setup.

Because the machinery needs to handle this rectangle, we need additional space at the top and bottom borders, say 15 mm extra. This extra space is not fixed either and can vary depending on how the machines are set up (a change of setup costs time and money). The single PCBs need to be cut out of the panel after manufacturing; we call this "de-panelization". Obviously, milling or sawing tools need room to work, so the PCBs inside the Assembly Panel must keep some distance from each other. Following all the required rules, it is an optimization task to place as many PCBs as possible onto a given rectangular shape. Any area not occupied by your product is waste, and the more wasted material, the more unnecessary cost is imposed on you.

What we order from the PCB bare-board fabricator is this "Assembly Panel". But the bare-board fabricator himself works with other sizes of raw material; let's call them the "Fabrication Panel". Fabrication Panel sizes do not follow any norm, so they differ significantly among PCB fabricators and depend on the PCB technology, such as standard, HDI or rigid-flex.

See PCB fabricators A, B, C, … along with a database containing all possible panel sizes for each fabricator. Notice that the 370 mm x 200 mm "Assembly Panel" achieves at best 88.79% utilization when using FAB Q's panel with a net size of 383 mm x 434 mm, where we get two Assembly Panels per Fabricator Panel. That is 11% of waste, with cost involved!

We can "play" a little with the Assembly Panel size, reducing the 370 mm or the 200 mm. Assume we use an Assembly Panel of 355 mm x 200 mm. Then we see a much better material utilization of 98%, with only 2% waste.

Using FAB A's available panel of 605 mm x 355 mm, we could fit three Assembly Panels into one Fabricator Panel, again at 98% utilization. Obviously, we are interested in high utilization! It saves cost, and having less wasted copper, acids, epoxy and other hazardous materials going back into the cycle makes us feel a bit greener.
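The panel arithmetic above can be sketched in a few lines of Python. This is a deliberately simplified grid-nesting model: it ignores tooling borders, de-panelization spacing and mixed orientations, which is why it reports a slightly higher figure than the 98% quoted above. The sizes are the ones from the example.

```python
# Simplified grid nesting of assembly panels on a fabricator panel.
# Ignores tooling borders, de-panelization spacing and mixed orientations.

def fits(fab_w, fab_h, asm_w, asm_h):
    """Count assembly panels placed in a plain row/column grid,
    trying both orientations of the assembly panel."""
    a = (fab_w // asm_w) * (fab_h // asm_h)
    b = (fab_w // asm_h) * (fab_h // asm_w)
    return max(a, b)

def utilization(fab_w, fab_h, asm_w, asm_h):
    n = fits(fab_w, fab_h, asm_w, asm_h)
    return n, n * asm_w * asm_h / (fab_w * fab_h)

# FAB A's 605 mm x 355 mm panel, 355 mm x 200 mm assembly panel:
n, u = utilization(605, 355, 355, 200)
print(n, round(u * 100, 1))   # 3 panels per fabricator panel
```

For irregular ("banana") outlines, real nesting software solves a much harder 2D packing problem, but the utilization metric stays the same: product area over raw-material area.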

Now the question remains: can we design an Assembly Panel within the smaller size of 355 mm x 200 mm? The answer is "yes".

Just turn one of the single PCBs and it will fit into the smaller size. This type of optimization was simple; a human being could easily see what a better positioning would be. But what if you have complex shapes or a multi-project panel? How do you make the best fit? This is where software comes into the game, to make the best fit of a "Banana in a Rectangle".

Optimization tasks:

How do I fit as many "banana-shaped" PCBs as possible onto a given Assembly Panel?

What size of Assembly Panel gives me what utilization at fabricators A, B, C, …?

What combination of all of this gives me the least cost?
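As a rough illustration of the least-cost question, here is a hedged sketch that compares fabricator offers. The panel sizes follow the examples above, but the prices and the simple grid-fit model are invented for illustration only.

```python
# Compare hypothetical fabricator offers for one assembly-panel size.
# Panel sizes follow the examples in the text; prices are invented.

def fits(fab_w, fab_h, asm_w, asm_h):
    a = (fab_w // asm_w) * (fab_h // asm_h)
    b = (fab_w // asm_h) * (fab_h // asm_w)
    return max(a, b)

# (fabricator, panel width mm, panel height mm, invented price per panel)
offers = [
    ("FAB A", 605, 355, 48.0),
    ("FAB Q", 383, 434, 40.0),
]

def cheapest(asm_w, asm_h):
    """Return (fabricator, cost per assembly panel) with the least cost."""
    best = None
    for fab, w, h, price in offers:
        n = fits(w, h, asm_w, asm_h)
        if n and (best is None or price / n < best[1]):
            best = (fab, price / n)
    return best

print(cheapest(355, 200))
```

A real search would also sweep candidate assembly-panel sizes, which is exactly the combined optimization described above.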

This optimization task is worth doing if you manufacture your products in high volume. Please bear with me: I cannot write down all the details here or recommend particular software tools for carrying out this type of optimization. If you would like to discuss this topic with me, please do so.


DDRx Memory Verification in PCB Design

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, talks about DDRx memory verification in PCB design.

When doing a printed circuit board (PCB) design with DDRx memory, I observe that designers often only take care of impedance planning and length matching.

In another article (Crosstalk matters) I explained that crosstalk can significantly change signal timing, so it might be wise to use a Signal Integrity (SI) planning tool to properly plan "clearance rules" and "parallelism rules" and avoid too much crosstalk. A "parallelism rule" is a DRC rule for a PCB editor that reports a violation when traces are routed in parallel for too long. However, by far not all PCB editors in the market can handle such a rule.

In this article (Length Matching) I explained that sometimes "length matching" is not sufficient and that one could use "TOF / Time Of Flight matching" instead. That is the case when you use routing layers with significantly different propagation delays (which essentially depend on the surrounding dielectric properties of each layer). Again, not all PCB editors in the market can take care of that. But even this might not be enough for a good DDRx design. Check this out.

Can you tell whether or not this DQ signal is a good one in relation to DQS?

What are you seeing in this picture? The green curve is a differential DQS (DQ Strobe) signal. It defines a reference point for timing measurement.

The yellow curve is a single-ended DQ data line. It looks good. OK, it has some overshoot and a little bit of ringing, but it looks monotonic around certain thresholds. So setup/hold times for DQ with respect to DQS are measured as shown in the pictures.

Let us look at some timing charts, for example in DDR3 datasheets like (**1).

What we can read from such a datasheet, essentially: a rising DQ signal must pass beyond a threshold VIH(AC) and must stay above this threshold for a minimum amount of time tVAC. If this condition is met, we can measure a SETUP time (towards DQS); if not, we already have a timing violation.

… then the DQ signal may drop below VIH(DC). If that point is reached, we can derive a HOLD time (or, if it is not, we have more than enough HOLD time).

These measured setup/hold values need to be validated against the setup/hold requirements from the datasheet tables. But it is not yet as simple as that. Now think of a tangent line from Vref to VIH(AC) as shown; this gives us a "nominal slew rate". With this nominal slew rate we go into the "derating tables" of the memory's datasheet and derive positive or negative values to add to the datasheet's required SETUP and HOLD times. This is called "derating". Only against those derated values do we compare the measured/simulated values.
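The measure-then-derate flow can be sketched as follows. The threshold voltages, the derating table and the base setup time below are invented for illustration; the real values must come from the memory's datasheet.

```python
# Sketch of the measure-then-derate flow. Thresholds, the derating table
# and the base setup time are invented; real values come from the datasheet.

VREF, VIH_AC = 0.75, 0.95          # volts (illustrative thresholds)

def crossing(t, v, level):
    """Linearly interpolated first rising crossing of `level` (t in ns)."""
    for (t0, v0), (t1, v1) in zip(zip(t, v), zip(t[1:], v[1:])):
        if v0 < level <= v1:
            return t0 + (level - v0) * (t1 - t0) / (v1 - v0)
    return None

def nominal_slew(t, v):
    """Slew of the tangent from Vref to VIH(AC), in V/ns."""
    return (VIH_AC - VREF) / (crossing(t, v, VIH_AC) - crossing(t, v, VREF))

# Invented derating table: nominal slew (V/ns) -> setup-time adder (ps)
DERATING = [(1.0, +45), (2.0, 0), (4.0, -30)]

def derate_setup(base_setup_ps, slew):
    """Add the adder of the nearest table row to the base setup time."""
    adder = min(DERATING, key=lambda row: abs(row[0] - slew))[1]
    return base_setup_ps + adder

# A clean edge from 0 V to 2 V in 1 ns:
slew = nominal_slew([0.0, 0.5, 1.0], [0.0, 1.0, 2.0])
print(round(slew, 2), derate_setup(170, slew))
```

Real datasheet tables interpolate between slew-rate rows and distinguish rising/falling edges; a SI tool does this bookkeeping for thousands of cycles automatically.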

This gets even more "complicated" when the DQ signal is not monotonic between Vref and VIH(AC). It means we have to derive the slew rate with a different tangent line, as shown.

You see the complexity of signal thresholds and timing that must be met in order to guarantee proper operation.

Nobody can tell me they see all this by just watching a yellow and a green curve, which anyway is just one of the thousands of cycles that a Signal Integrity simulator analyzes in order to find a worst-case scenario. And this in combination with crosstalk, varying voltage levels, maybe even a parameter sweep of the PCB stackup.

To be frank, many people to whom I explain what modern signal integrity tools can automatically validate did not know that DQS/DQ, DQS/CK or CK/Address/Command lines have such complex relationships to be met.

Fortunately, our real hardware is quite robust and tolerant. But if you want to drive your DDRx interface to maximum performance while keeping it robust, you should do simulation. Simulation will tell you what margins you have in your design. You might want to use a Signal Integrity tool for this sort of automated analysis. I am not saying it is trivial to set up, but there is software in the market that helps a lot through what vendors call a "DDRx Wizard".

I am experienced with HyperLynx from Mentor, a Siemens business, which brings such a DDRx Wizard. In design data that was ready to order, I once found sub-optimally implemented routing in a DDR3 design within two days of extra work for setup and simulation. That saved a re-design and its cost, but moreover weeks of time.

Let me also explain what design mistake was found. DQS and DQ signals are routed in bundles, which need to meet matching conditions in length/time. A certain trace width had been selected to meet the impedance requirements, but the routes went into a fine-pitch BGA area, so the designer decided to narrow the trace width there. Those narrow segments of course have slightly different electrical characteristics. Two DQ signals were overlooked for some reason and had longer "narrowed" lengths. The automated verification showed that those signals failed in setup/hold margin compared to all the others. With this information, we reviewed the layout, found the problem and eliminated it. If this had gone into production, everything might well have worked during hardware bring-up. But in the field, maybe on a Monday, in sunshine, at 70 °C ambient, it might have failed spontaneously.



(**1) www.micron.com, public datasheet of DDR3 low-power SDRAM

(**) HyperLynx Software


Why crosstalk matters in PCB Design

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, talks about the importance of crosstalk in PCB design.

Traces on a printed circuit board (PCB) that are routed close together will interfere with each other. Virtually any trace on a PCB interferes with any other trace, but often at a level we can neglect. We call this crosstalk. Depending on the spatial relationship between traces, they have a coupling inductance and a coupling capacitance. So any trace on a PCB can act as an "aggressor" to other "victim" signals.

In the general case, analysis software can consider any number of trace segments coupling with each other. For the sake of simplicity, in the following pictures we consider only two traces routed in parallel. Traces routed in parallel form a "coupling capacitance" and a "coupling inductance". Both depend on

  • the distance between the traces and the dielectric environment
  • the length over which the traces are routed close to each other

The closer together and the longer in parallel, the more coupling.

Field-solver tools help us understand the details and create the necessary parameters for simulation.

The strength of crosstalk (the signal amplitude of the crosstalk noise) depends on

  • how close the traces are and how long they are routed in parallel -> more coupling -> more crosstalk
  • the dV/dt of the aggressor signal, i.e. faster rise/fall times -> more crosstalk

Please note that the amplitude of crosstalk also depends on the rise/fall times of the aggressor signals. This means the same PCB layout might perform better or worse when you change the ICs or, on programmable pins, change the slew rate/drive strength.
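A first-order, capacitive-only estimate illustrates this dependence on the aggressor's edge rate. The formula V_noise ≈ Cm · dV/dt · Z is a rough approximation (real coupling also has an inductive part and needs a field solver plus simulation), and all values below are illustrative.

```python
# First-order capacitive crosstalk estimate: the aggressor's dV/dt drives a
# current through the mutual capacitance into the victim's impedance.
# Illustrative only: real coupling also has an inductive component.

def crosstalk_noise(c_mutual_pf, swing_v, rise_time_ns, z_victim_ohm):
    dv_dt = swing_v / rise_time_ns * 1e9        # V/s
    i_coupled = c_mutual_pf * 1e-12 * dv_dt     # A, i = Cm * dV/dt
    return i_coupled * z_victim_ohm             # V of noise on the victim

# Same layout and coupling, only the driver's rise time changes:
slow = crosstalk_noise(2.0, 3.3, 2.0, 25)   # 2 ns edge
fast = crosstalk_noise(2.0, 3.3, 0.5, 25)   # 0.5 ns edge
print(round(slow, 3), round(fast, 3))
```

Quadrupling the edge rate quadruples the estimated noise on an unchanged layout, which is exactly the point about swapping ICs or drive-strength settings.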

How can crosstalk harm our signal?

Obviously, an aggressor can impose a disturbing voltage level onto a victim signal, like an aggressive digital signal imposing noise on a sensitive analog signal. Likewise, a "slower" but high-voltage signal can cause significant crosstalk into any other signal.

But often, the crosstalk noise has a less obvious impact.

It changes the timing of our signals!  

The "timing of a signal" is always measured from a reference point, such as when the signal crosses a certain voltage level. If crosstalk noise slightly shifts the voltage levels of our victim signals, they will obviously reach those thresholds sooner or later, hence appear "slower" or "faster" in relation to, for example, a clock signal. This can change setup/hold margins significantly!

To analyze what can happen, we can extract from the PCB layout, or draw manually, a simulation setup of two traces routed 4 cm in total, of which 2 cm run in parallel at close distance (trace width = 130 µm, trace-to-trace gap = 260 µm), so that they have relevant coupling. We do not need to share more details than that for now.

Now let us simulate both signals switching from low to high, once without the coupling and once with the coupling considered. The green curve is the signal in isolation, the blue curve is the signal with coupling.

We observe that the VinH level of 1.7 V is reached later in time, actually 7 ps later. Now there is a rule of thumb that with typical PCB material we have about 15 cm/ns (169 ps/inch). This means our 7 ps are equivalent to about 1 mm of trace length! "One millimeter". Think about it. We often specify length matching to require that all traces have the same length within 100 µm or 0.5 mm, and then one mistake in the design gives us this amount of crosstalk.
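The rule-of-thumb conversion can be written down directly; 15 cm/ns is the approximate propagation speed assumed above.

```python
# Rule of thumb: ~15 cm/ns propagation speed on typical PCB material.
SPEED_MM_PER_NS = 150.0

def skew_to_length_mm(skew_ps):
    """Convert a timing skew into the equivalent trace length."""
    return skew_ps / 1000.0 * SPEED_MM_PER_NS

print(round(skew_to_length_mm(7.0), 2))   # the 7 ps from the simulation ≈ 1.05 mm
```

The same constant also reproduces the 169 ps/inch figure: 25.4 mm at 150 mm/ns is about 0.169 ns.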

Now think about a bus routing: several signals run in parallel over a longer distance, and in a worst-case scenario the crosstalk onto any one victim can be even larger and the impact on timing even worse.

When you are doing designs with, for example, DDR memories, you might want to use a Signal Integrity tool for this sort of analysis. There are several such tools in the market!

I am used to working with HyperLynx from Mentor, a Siemens business. It makes the extraction of PCB layout data very easy and gives me a "DDRx Wizard" to validate DDRx memory interfaces. In another article I will explain why you should use such software tools to validate DDRx designs, because DDRx signals are different: it is not only that they need to reach a certain threshold, they need to fulfill much more complex conditions around the threshold level. Stay tuned for that article.


Beyond SPICE – Analog/Mixed Signal Simulation

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains why it makes sense to use more than just SPICE in circuit simulation.

Many electronics developers are very familiar with circuit simulation using SPICE (Simulation Program with Integrated Circuit Emphasis) models, netlists and a SPICE simulator tool. There are quite a few tools in the market, some of them even free of charge. In this article, I would like to inspire you to go a little beyond using only SPICE in circuit simulation and do AMS, Analog/Mixed-Signal simulation.

Now, what is AMS?

A simple yet striking example is an H-bridge circuit driving an electrical motor, the H-bridge itself being driven by a more or less complex digital PWM signal (clk1, clk2) coming from, say, a microcontroller or an FPGA. See the schematic of such a setup.

Essentially, we have several domains. The PWM might be described by some piece of C code running inside a microcontroller or, as shown in the example, in VHDL. The discrete components, such as the power transistors and diodes, can easily be represented by SPICE models.

How could we model the DC motor? A DC motor is a device with two terminals, through which an electrical current flows and across which there is an electrical voltage. Depending on the motor's characteristic equations, the current is turned into a torque at the motor shaft and an angular velocity. If you are familiar with SPICE, you will agree that modeling a DC motor in SPICE notation is not so straightforward. Even more, the DC motor shaft has some "load" attached to it, so we might be interested in modeling the behavior of a moment of inertia. In this particular example, the shaft shall have a minimum and a maximum position, and when the motor turns the shaft into those positions, we want to mimic a stronger counter-force in the simulation. All this is too much for SPICE notation, or better said, SPICE wasn't developed to describe all of this in any easy way. However, the language VHDL-AMS was designed for exactly such use cases. VHDL-AMS is an extension of the VHDL that FPGA and IC designers are used to working with, extended to describe analog behavior. See the SPICE or VHDL-AMS code behind some of the symbols.
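To give a flavor of what "system equations" means here, a minimal sketch of the classical DC-motor equations, integrated with forward Euler in Python rather than VHDL-AMS. The motor parameters are illustrative, and the min/max shaft-stop behavior is omitted.

```python
# Classical DC-motor system equations, integrated with forward Euler:
#   V = R*i + L*di/dt + Ke*w     (electrical side)
#   J*dw/dt = Kt*i - B*w         (mechanical side)
# All motor parameters are illustrative; the shaft end stops are omitted.

R, L = 1.0, 1e-3        # winding resistance (ohm) and inductance (H)
KE, KT = 0.05, 0.05     # back-EMF (V*s/rad) and torque (N*m/A) constants
J, B = 1e-4, 1e-5       # rotor inertia (kg*m^2), viscous friction (N*m*s)

def run(v_supply, t_end=0.5, dt=1e-5):
    """Integrate current and angular velocity for a constant supply voltage."""
    i = w = 0.0
    for _ in range(int(t_end / dt)):
        di = (v_supply - R * i - KE * w) / L
        dw = (KT * i - B * w) / J
        i += di * dt
        w += dw * dt
    return i, w

i, w = run(12.0)
print(round(i, 3), round(w, 1))   # near-steady-state current (A), speed (rad/s)
```

In VHDL-AMS these two equations would be written as simultaneous statements on branch quantities, and the solver, not a hand-written Euler loop, takes care of the integration.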

The transistor is modeled in SPICE.

The PWM signal is, in this case, modeled like a clock in VHDL.

The DC motor is modeled in VHDL-AMS by its system equations.

Let me show three fundamental examples and a more complex one: a resistor, a capacitor, an inductor and a fuse. While in SPICE, R, L and C are essentially primitives, in VHDL-AMS they aren't primitives but are described by their system equations.

R, L and C have two terminals, a physical quantity (the electrical current) THROUGH the terminals and a physical quantity (the electrical voltage) ACROSS the terminals. With this, R, L and C are described as follows.

A resistor fulfills Ohm's law: Voltage = Resistance * Current

Here is the code of an inductor: Voltage = Inductance * dI/dt

Likewise, a capacitor is described by its equation: Current = Capacitance * dV/dt

Now let us look at a "fuse". A fuse is not a primitive component in SPICE, and you can find even PhD theses on the web about how to model the behavior of a fuse in SPICE using a mix of controlled current and voltage sources and similar SPICE primitives. Here is the code for how a fuse could be modeled in VHDL-AMS, describing its thermal behavior up to the melting point. This is just one way to model a fuse.
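In the same spirit, here is a hedged sketch of such a behavioral fuse: I²R self-heating against a thermal mass and a cooling path, with the element opening once it reaches its melting temperature. All parameters are invented for illustration.

```python
# Behavioral fuse: I^2*R self-heating against a thermal mass and a cooling
# path; the element "melts" (opens) above T_MELT. Parameters are invented.

T_AMB, T_MELT = 25.0, 230.0   # ambient and melting temperature (degC)
R_FUSE = 0.05                 # element resistance (ohm)
C_TH = 0.002                  # thermal capacitance (J/K)
G_TH = 0.01                   # thermal conductance to ambient (W/K)

def blows_within(current_a, t_end=5.0, dt=1e-4):
    """Return the melting time in seconds, or None if the fuse survives."""
    temp, t = T_AMB, 0.0
    while t < t_end:
        p = current_a ** 2 * R_FUSE                        # Joule heating (W)
        temp += (p - G_TH * (temp - T_AMB)) / C_TH * dt    # thermal balance
        if temp >= T_MELT:
            return t
        t += dt
    return None

print(blows_within(1.0), blows_within(10.0))   # survives vs. blows
```

A VHDL-AMS fuse model would express the same thermal balance as a differential equation and switch the electrical branch to high impedance at the melting point.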

Coming back to the H-bridge driving a DC motor: now that we have modeled all the relevant parts, we can run a simulation and look at time-domain plots. Putting it all together, you see simulation results that are a true mix of digital and analog electrical signals along with other physical quantities, such as the torque in Nm or the shaft angle in radians.

Now you understand better what is meant by "Analog/Mixed Signal".

Such an AMS tool, like PADS AMS, comes with a library that brings many more models of functionality, like filters, pumps and gears. Moreover, you can also find VHDL-AMS model libraries on the web. Search, for example, with keywords such as VHDL-AMS and automotive, and you will find plenty of resources.

I find it pretty cool, and would like to mention, that there is also a free cloud-based version of this technology available: AMS Cloud.



SPICE = (Simulation Program with Integrated Circuit Emphasis)

FPGA = Field Programmable Gate Array

VHDL = VHSIC Hardware Description Language

VHSIC = Very High-Speed Integrated Circuit



(1) Mentor, a Siemens business: user's manual of the "PADS AMS" software and the example libraries therein. https://www.pads.com/analog-mixed-signal/


Don’t forget the Via- and Pin-Length when trace-length matching

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains some important aspects of trace-length matching.

Many PCB layout editors in the market still do not account for the length through a via connection when matching a group of signals for equal length (or for equal timing, which many PCB editors can't do at all). I must say this is not necessary for all designs. But as your signals get faster and your requirements for length/time matching get tighter, you should not overlook this detail.

For this article, I use the PCB layout data of the "BeagleBone Black" design, a design with an ARM-based microcontroller along with DDR3 memory. The design data is available on the web site (*1). I have imported that design into my PCB tool and will demonstrate a couple of things related to length matching on the DDR3 address lines.

I make some simplifications to the design rules, just for the purpose of this article: I simply set a design constraint that all DDR3 address lines shall have the same length within a tolerance of 250 µm (~10 mils) between longest and shortest. In a correct setup, I would have set up address, command and clock requirements. Such a constraint can be defined in a constraint manager tool by filling in the table. The picture shows a "Constraint Class" called "DDR3_ADR", to which I assigned all address lines and then gave the rule to match within 0.25 mm. Alternatively, I could have set it up to match a required time tolerance.

Just for the purpose of demonstration, I set up my PCB editor in the WRONG way, to show the behavior of many other PCB editors in the field.

I want to analyze the routing of the DDR3 bus bundle. The general routing overview is shown here.

With this wrong setup, the interactive DRC (Design Rule Check) indicates that there is no violation in length matching: all traces from longest to shortest match within 0.25 mm, and everything is marked as "Tuned". Those 0.25 mm were possibly the original intent for this design, done in another tool.

This actual trace-length information from the layout can also be brought back into the "Constraints Manager", so that a design engineer who works on the schematic but does not have a PCB editor can also review it. What we see is that DDR_A15 is the longest track and DDR_A4 the shortest, being 0.222 mm shorter.

The routing of the address lines was done using inner and outer layers: Top, Bottom, Layer 3 and Layer 5 were used in a mix. I am showing here only a 3D view of the A0 and A1 address lines. A0 uses Top, Layer-5 and Layer-3 routing; A1 uses Top and Layer-3 routes. Hence both signals take significantly different lengths through the involved via connections: A0 uses one via more, and in two vias the signal travels almost the whole via length. Remember, the board thickness is about 1.6 mm.

Again, I had prepared the PCB editor, for demonstration purposes, to behave like many other PCB editors. But now, let us be more precise.

Now I set up the PCB editor to calculate and consider via lengths. And, no surprise, now a different net is the longest one, and we no longer meet our length-matching requirements! We miss it in one case by ~2.7 mm! So we detect that the design could still be improved by adding tuning meanders, which in the tool that I am using could be done automatically.

This information can also be loaded into the „Constraints Manager“.

There is also another, different view on this topic. The reason for length matching is that we want all signals to have about the same arrival time with respect to a clock signal. So essentially, what we are asking for is that all address lines have the same "flight time" (or TOF = Time Of Flight). But this TOF-matching requirement is often turned into a length-matching requirement, even though I explained in this article (Inner vs Outer Layer Routing) that, e.g., 1 cm of routing on the Top layer is very different in timing from 1 cm on Layer 3. You could have done this if you had decided to route all address lines on a single (e.g. inner) layer, except for the fanout, which has to be on Top.
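The difference between length matching and TOF matching can be illustrated with a small sketch. The per-layer delays are illustrative values for outer (microstrip) versus inner (stripline) layers.

```python
# Time of flight instead of plain length: per-layer propagation delays
# (illustrative ps/mm values; outer microstrip faster than inner stripline).

DELAY_PS_PER_MM = {"Top": 5.9, "Layer3": 7.1, "Layer5": 7.1, "Bottom": 5.9}

def tof_ps(segments):
    """segments: list of (layer, length_mm) pairs for one net."""
    return sum(length * DELAY_PS_PER_MM[layer] for layer, length in segments)

# Two nets with identical 30 mm total length but different layer usage:
a0 = [("Top", 10.0), ("Layer3", 20.0)]
a1 = [("Top", 25.0), ("Layer3", 5.0)]
print(round(tof_ps(a0), 1), round(tof_ps(a1), 1))   # same length, unequal time
```

Both nets are perfectly "length matched", yet they arrive roughly 18 ps apart, which is why a TOF-capable editor or an SI tool is the safer check.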

[added Rev.2]

After the first version of this article, Antoon sent a comment not to forget about "pin package length/delay". So let us talk about that, too. An IC is a semiconductor die inside a package; obviously there is a connection between the contact pad on the die and the solder pad of the package. Think of a "bond wire", although the connection can be very different depending on the package construction. This connection has a characteristic that can be expressed as a "length" or a "flight time", which should be considered when routing traces on the PCB. Here is how a PCB editor (if it supports this sort of rule) can capture such values in the "Constraints Manager".

6300-0002 is the part number of the microprocessor that was used. We see all the BGA package pins.

DDR_A0 is connected to BGA ball "F3". At the moment of writing this article I unfortunately could not access the information about the package-internal length for this microprocessor, so, just as an example, I am specifying 1.3 mm.

In general, you might get this information from the component manufacturer as a length or a delay for each pin. Indeed, depending on the product and the package size, the chip-internal length can be "large"; in the past I have seen 16 mm. I went to the web site of an FPGA manufacturer to review some values and saw values up to the range of 200 ps (~3 cm trace-length equivalent): indeed significant, as Antoon had commented. Watch out if your IC specifies very different internal lengths/delays for the pins of a bus interface.

The following picture again shows the interactive DRC and timing check. The tool calculates all the contributors to "length" or "flight time" for us, i.e. the traces, the vias and the chip-internal length. In the case shown, DDR_A0 is now reported at a length of 28.10903 mm, which is the specified 1.3 mm longer than before. Now we can get active and do the tuning in consideration of all relevant parameters.
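The bookkeeping the tool does can be sketched as a simple sum of contributors. The trace and via values below are illustrative; only the 1.3 mm package length reuses the assumption from above.

```python
# Sum all contributors to a net's matched "length": traces, via barrels and
# the package-internal pin length. Trace/via numbers are illustrative; the
# 1.3 mm package length is the assumed value used in the text.

def total_length_mm(trace_mm, via_mm, package_mm=0.0):
    return trace_mm + via_mm + package_mm

def matched(lengths_mm, tolerance_mm=0.25):
    """Is the longest-to-shortest spread within the matching tolerance?"""
    return max(lengths_mm) - min(lengths_mm) <= tolerance_mm

# Trace-only lengths can look "tuned" while the full sums are not:
traces_only = [24.1, 24.0]
full = [total_length_mm(24.1, 2.7, 1.3), total_length_mm(24.0, 0.9, 1.3)]
print(matched(traces_only), matched(full))
```

The first check passes and the second fails, which is exactly the effect of switching the editor from trace-only to full-contributor accounting.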


If you are routing high-speed critical signals, consider what your PCB editor is truly able to do for you. I am experienced with "PADS Professional" and made the screenshots with that tool.

As you can see from this chosen example: this is a working product, and hardware is quite tolerant. But if you drive your DDRx memory interfaces towards their bandwidth limits, you must consider the concepts explained here and possibly use a Signal Integrity analysis tool to cover even more effects relevant for signal quality and timing (i.e. "Signal Integrity") that a PCB layout editor cannot take care of. Examples would be crosstalk, or the stub length in a via and its relevance for disturbing signal integrity.


(1) https://beagleboard.org/black

(2) Screenshots taken from „PADS Professional/Xpedition“ software from Mentor, a SIEMENS business, www.pads.com/professional


Is length-matching all when routing high-speed signal bundles on a PCB?

Many PCB layouts are designed such that bundles of high-speed signal traces, like the address or data bus of a memory system, are "length matched". But there is more to consider. Whether or not it is important depends on the system requirements. Some aspects are demonstrated in this presentation, considering the difference between using inner and outer layers for routing.


DFF – Design For Fabrication – How to check a solder mask?

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains how to check a solder mask.

Recently I was asked which software tools I recommend for PCB (Printed Circuit Board) solder-mask checks, so I wanted to write a small article about "DFF" (Design For Fabrication) checks.

"DFF" means we want to follow rules to ensure that the PCB fabricator doesn't have problems in manufacturing.

"DFM" (Design for Manufacturing) means we want to follow rules to ensure that we don't have problems during PCB assembly and test.

Is this obvious?

Every PCB layout design is made because we want to manufacture it later on.

Is this still obvious?

We do not want problems during manufacturing, because this will cost money.

I expect everyone agrees with both statements above; nevertheless, I see many PCB designers who do not make use of DFF or DFM checking methods. Of course, the more experienced a PCB designer is, the more he knows what to look for in his own designs. Still, there are so many rules to follow that it might be wise to consider software assistance. But let us first review the kind of problems I am talking about, with a simple yet striking example: the solder mask (SM).

A solder mask, or solder resist, is a coating layer on the surfaces of our PCBs. It protects against corrosion and improves dielectric strength. On top of solder pads, of course, we don't want solder resist, so there we have an "SMO", a Solder Mask Opening.

A solder mask has to fulfill a couple of design rules, as you can see in the following illustration, which I found in [1]. The copper is shown in orange and the solder mask in green.

  • The SM must have some "clearance" against the solder pad (e.g. 50 µm)
  • The SM over traces must have some minimum cover, otherwise it might detach (e.g. 100 µm)
  • Between SMOs you create a "bridge", which has to have a minimum size (e.g. 80 µm)

Please review those values against the design rules that your fabricator specifies.
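Such rules are easy to express as checks. Here is a hedged sketch for a pair of adjacent pads, using the example limits from the list above and including a registration (misalignment) tolerance; real DFF tools of course check the full artwork geometrically.

```python
# Check the three solder-mask rules for a pair of adjacent pads, with a
# registration (misalignment) tolerance. Dimensions in micrometres; the
# limits are the example values from the text.

MIN_CLEARANCE = 50   # minimum SMO oversize against the pad
MIN_BRIDGE = 80      # minimum resist web between two openings
MIN_COVER = 100      # minimum resist between opening edge and next trace

def check_pads(pad_gap_um, smo_oversize_um, trace_gap_um, misreg_um=0):
    """pad_gap: copper gap between two pads; trace_gap: pad edge to the
    edge of the neighbouring trace. Returns the list of violated rules."""
    violations = []
    if smo_oversize_um < MIN_CLEARANCE:
        violations.append("clearance")
    if pad_gap_um - 2 * smo_oversize_um < MIN_BRIDGE:
        violations.append("bridge")
    if trace_gap_um - smo_oversize_um - misreg_um < MIN_COVER:
        violations.append("cover")
    return violations

# Nominally fine; a 150 um misregistration uncovers the adjacent trace:
print(check_pads(250, 60, 300), check_pads(250, 60, 300, misreg_um=150))
```

Note how the same geometry passes at nominal registration and fails once the misalignment budget, discussed below, is spent.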

What could go wrong?

If such rules are violated in your design, the SM might not be manufactured properly, which can impact the quality of your soldering process during assembly. For example, if a bridge is too small, small slivers of solder resist can detach and, in the worst case, stay on top of a solder-pad surface. That pad can then no longer be soldered properly.

Now let us see what else can happen due to misalignment of the solder mask against the copper image. Why is there a misalignment between the copper image and the solder mask at all? Recall that PCBs are fabricated on larger "fabrication panels", e.g. 500 mm x 500 mm, filled up with your "assembly panels", which themselves carry one or more of your single PCBs.

Now recall how the multi-layer PCB is treated before it is covered with solder resist. While the "cores" and "laminates" are pressed together, they are treated with, e.g., 15 bar of pressure. Do you think the conductive pattern you designed in your PCB tool ends up exactly at the coordinates you specified? Of course not. Ask your fabricator about this parameter, but assume that over a distance of 10 cm you may have a position accuracy of +/- 10 µm. On the panel level, this could mean your conductive patterns are displaced by maybe 50 µm from the origin.

The solder mask might be manufactured by photo-plotting using a film. The film itself has a different expansion coefficient than the PCB, which is why there can be a misalignment between your "conductive pattern" and your "solder mask pattern", and obviously this misalignment is not the same for the various assembly panels.

I made a small animation of such a misalignment between copper and solder mask. The copper is red, the solder mask opening green, and "gold" marks areas with a solder mask opening on top of copper (i.e., areas of unprotected copper). The solder mask oversize is 75 µm.

I am showing a displacement of the SM against the conductive pattern by 75 µm in essentially any direction. Check your fabricator's specification for "conductive pattern to solder resist"; you will probably see values of 50-100 µm. To keep the animation short, I showed the misalignment in only a few directions. As you see, the solder mask opening ends up on top of a trace right next to a solder pad. What do you think might happen during soldering? Solder bridges between pad and trace might be the result, which costs the effort and time of repair. But maybe the assembly plant analyzes this failure and, because they cannot change the PCB design data anymore, does some "tuning" of the solder-paste definition.

Of course there are many more design rules that the PCB fabricator specifies to us, like all kinds of minimum object-to-object clearances. A PCB designer should learn the reason behind such design rules. If you understand why a rule exists – driven by manufacturability – then you will understand why the PCB fabricator has an interest in you following it, or even worse, why the CAM operator at the fabricator might change your GERBER data in a way that lets him run smoothly through his production and optimize his profit!

I am sorry to tell you this, but PCB fabricators do sometimes change the GERBER data in a way that can even change the electrical performance of your product. Of course, you can bind them by contract to tell you about all the changes they recommend and why they are needed. At the very least, you should compare the „manipulated“ manufacturing data against the data that you sent.

The bad thing about all this: maybe you have two different fabricators A and B for prototypes and series production, and the CAM operators at A and B recommend and apply different changes. I am just reporting what my clients told me has happened to them. A prototype supplier under time pressure might just take such decisions on his own.

A few examples of the changes that are possibly made:

  • Maybe you had traces in your design that are too thin; the CAM operator might widen them.
  • Maybe you violated minimum sizes of annular rings; the CAM operator might enlarge them.
  • Maybe you have slivers in the conductive pattern; the CAM operator might change the drawing.
  • and many more reasons

Just for the purpose of demonstration, I configured my PCB editor in a bad way, as if I had made a mistake. I „flooded“ my copper planes using lines of 1 mil (~25 µm) width, and I defined such a pad-to-plane clearance that two very small copper bridges result, as shown in the next picture. You will see a lot more critical items in the picture, but I only highlighted the bridges. What you see in the following image is a through-hole connector P1, its five pins, and the copper flooded around them.

The CAM operator might not allow this to go into manufacturing, for example for this reason: the conductive pattern is checked by an AOI (Automated Optical Inspection) system. Because the „bridge“ is so small, sometimes it will be there after manufacturing and sometimes it will not. This means the AOI will raise warnings many times. To prevent this – because it costs money – the CAM operator will change the GERBER data so the AOI can run smoothly, in this case possibly by removing the „bridge“ from the GERBER data. Now imagine we had a through-hole connector row with many of those small „bridges“ which got removed. Such a change could result in a „slot“ structure as shown in the next picture.

Think further: if there were high-speed signals on an adjacent layer, such changes to the GERBER data might have broken the HF return path of those signals. Manufacturing-wise all looks good, but we changed the electrical performance.

Clearly, we have to take care. We shall not have structures in our PCB fabrication data that force the PCB fabricator to make changes to those data!

What could a PCB-Design engineer do about it?

Check what kind of DFF rules your PCB editor supports. I am familiar with „Xpedition“ and „PADS Professional“ from Mentor, a Siemens business [2], and show some screenshots of how this software enters and visualizes such rules and violations. There are hundreds of DFF checks in „PADS Professional“. You need to set the correct parameters as specified by your fabricator. Then you might save them for later reuse as a „DFF scheme“ for each of your fabricators, like „Fab A STD“ as shown in the next picture, or maybe as one common scheme that fits most of your fabricators.

If you have a good DFF tool, it will show you sample pictures of each type of check. If you are uncertain why a rule exists, ask your PCB fabricator; he can tell you the manufacturing reason behind it.

If your PCB editor of choice does not allow for such DFF checks, then screen the market for such tools. There are quite a few tools on the market that can operate on Gerber or, preferably, ODB++/IPC-2581 input. Also feel free to contact me for consultancy.


We shall not have structures in our PCB fabrication data that force the PCB fabricator to make changes to those data!

(of course I mean changes other than those made for photo-transfer reasons)

We want all our manufacturing to run smoothly. That is why you might want to analyze your PCB layout using DFF design rules. Notice that we are asking the PCB designer to take care of an issue that will increase or lower the profit of someone else. But in the end, it is the profit of the company that you work for. Someone in your company must have an interest in adopting a DFF/DFM methodology.

It might be that your PCB design tool can do DFF. It might be that you want to run additional software tools on your manufacturing data. No matter how you do it – do it!


[1] www.multi-circuit-boards.eu, public data about basic design rules for PCBs

[2] www.pads.com/professional, manual on DFF rules


How to verify „Current Carrying Capacity“ in a PCB Design

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains how to verify the current carrying capacity in a PCB design.

In this article I want to talk about current carrying capacity in PCBs (Printed Circuit Boards): why it is „difficult“ to predict accurately without simulation, and how charts such as those in IPC-2152 (and formerly IPC-2221) give practical recommendations that let us avoid simulation. Then I will show how a PCB layout engineer can easily validate his design by means of an easy-to-use simulation tool.

„Can my trace of 5mm width carry sufficient current?“

Say we have a PCB of 160 mm x 100 mm, 1.6 mm thick. On this PCB we have a trace on the top layer, with a copper thickness of 35 µm (~1 oz).

What is the answer to the following question?

„I have a 5 mm wide trace, how much current can it carry?“

The clear answer is: „well, it depends“. Do you want to know when the trace reaches the temperature at which it starts to catch fire? The answer depends on some more details. So let me ask the question this way:

„How much current can a 5 mm wide trace carry if it shall heat up by no more than about 40°C?“

I am afraid the answer is still „well, it depends“. In a simulation tool capable of simulating all details of electrical current flow in solid bodies, as well as all mechanisms of heat transfer, I made a simplified setup to show what happens to our trace. Let us assume we have the PCB specified above, with this 5 mm wide trace on an outer layer at 35 µm copper thickness. In the following animation I am showing simulation results for 1 A to 9 A of current passing through that trace, plotting the surface temperature above the trace on the PCB. I will explain the simulation setup and my assumptions later.

Observation: We might get the impression that we can pass 9 A through the trace and the temperature on the surface above the trace will nowhere rise by more than 50°C above ambient (ambient at 35°C in this case), while most of the board area stays near ambient temperature. Let me explain the basics of heat transfer, then make changes to our board and see what we observe.

The current flow through the trace dissipates the electrical power P = I²·R in the trace, i.e. the electrical current passing through the electrical resistance of the trace – with the electrical resistance itself being temperature dependent. This electrical power is equivalent to „heat“. There are different mechanisms by which this heat can „travel“ and leave our board, from the point where it is produced to the „colder ambient“. If the heat did not leave at all, our PCB would get very hot and catch fire. Hopefully the heat does leave the board towards the ambient; the board therefore settles at a temperature higher than the ambient, but ideally still sufficiently low. How easily the heat can leave the PCB determines how much hotter our board gets compared to ambient.
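To get a feeling for the numbers, the dissipated power can be sketched with the standard resistivity formula. Note that the 160 mm trace length is my own assumption here (the board length), not a value from the simulation setup:

```python
# Sketch: power dissipated in the 5 mm wide / 35 um thick trace,
# assuming a 160 mm trace length and standard copper material data.
RHO_CU_20C = 1.68e-8   # ohm*m, copper resistivity at 20 degC
ALPHA_CU   = 0.00393   # 1/K, temperature coefficient of copper resistance

def trace_resistance(length_m, width_m, thickness_m, temp_c=20.0):
    """DC resistance of a rectangular trace, including the temperature dependence."""
    rho = RHO_CU_20C * (1.0 + ALPHA_CU * (temp_c - 20.0))
    return rho * length_m / (width_m * thickness_m)

r20 = trace_resistance(0.160, 5e-3, 35e-6)             # trace at 20 degC
r75 = trace_resistance(0.160, 5e-3, 35e-6, temp_c=75)  # trace heated to 75 degC

for i in (1, 9):
    print(f"I = {i} A: P = {i*i*r20*1000:.0f} mW (20 degC), "
          f"{i*i*r75*1000:.0f} mW (75 degC)")
```

At 9 A this yields roughly 1.2-1.5 W of heat in the trace alone – heat that has to leave the board somehow.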

How can „heat“ leave our PCB?

  • Heat can be exchanged by „convection“, i.e. at surfaces between solids and fluids, where our fluid is often just the „air“ surrounding our PCB
  • Heat can be exchanged through „radiation“ between surfaces of different temperatures, in the IR spectrum
  • Heat can be exchanged through „conduction“ in solids, e.g. through construction elements into the housing and from there to the ambient

For simplification of the above-mentioned simulation, I modelled the board so that heat cannot leave by radiation (which is not true in reality!). I further simplified by modelling the surface of the PCB as exchanging heat with the environment at, say, 5 W/m²K (meaning: on a surface of one square meter, the surface gets one Kelvin hotter than the ambient for every 5 watts exchanged through it). Of course such a uniform heat-exchange coefficient at every point of the surface is not valid in reality, but it is fine for simplifying my illustration. In reality, our PCB is usually surrounded by a fluid, e.g. „air“. The heat exchange at the surface depends on how this fluid can carry heat away from it, which in turn depends on parameters like the amount of airflow, the temperature of the fluid, turbulence, gravity, humidity and several more. If you want a more accurate analysis, you need a simulation tool that can calculate „convection“ as well as the flow of electrical current in the conductors. Some „CFD“ software on the market can do this.
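With a uniform heat-exchange coefficient, the best-case average temperature rise follows directly from ΔT = P / (h·A). A sketch, assuming roughly 1.24 W of dissipation (9 A through the trace, estimated with standard copper resistivity) spreads perfectly over both faces of the 160 mm x 100 mm board – an idealization that a full copper plane only approaches:

```python
# Sketch: best-case average temperature rise dT = P / (h * A), assuming the
# dissipated power spreads perfectly over both faces of the board.
def avg_temp_rise(power_w, h_w_per_m2k, area_m2):
    """Average surface temperature rise above ambient."""
    return power_w / (h_w_per_m2k * area_m2)

area = 2 * 0.160 * 0.100  # both faces of the 160 mm x 100 mm board, in m^2
print(f"{avg_temp_rise(1.24, 5.0, area):.1f} K at 5 W/m^2K")  # ~7.8 K
print(f"{avg_temp_rise(1.24, 3.0, area):.1f} K at 3 W/m^2K")  # ~12.9 K
```

These back-of-envelope values are in the same ballpark as the simulated ~11°C and ~14°C rises reported below for the fully flooded inner plane, which is what makes the board approach the „perfect spreading“ case.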

In any case, the heat must travel from where it is generated to the surface of the PCB (more accurately, to all surfaces, including those of the electrical components on the board and of surface-enlarging elements such as heat sinks). How heat travels inside the PCB towards the surface depends on the actual PCB layout (the mix and distribution of copper and dielectrics). How easily heat leaves from the surface depends on the environment of our PCB (the „ambient“).

Do you think that was already the final answer – a 5 mm wide trace can carry 9 A and heats up by no more than 50°C? Of course not…

Now let me make a change to the PCB. On an inner layer, I create a copper plane the same size as the board itself, fully flooded. This obviously impacts the thermal conductivity of the board and hence how heat can travel from the trace to the whole surface. Copper has roughly 1000x better thermal conductivity than FR4, which means that a power plane inside a board acts like a heat spreader. The following picture shows the simulation results. Indeed, heat now seems to travel more easily towards the whole surface, from where it gets transferred away. The PCB stays cooler.

Observation: The same amount of current (9 A) now leads to only ~11°C temperature rise above ambient.

Now let me show what happens when we change the assumed heat-exchange coefficient to, say, 3 W/m²K.

Observation: The same amount of current (9 A) leads to ~14°C temperature rise above ambient.

Now let us change the copper plane inside the PCB so that only the left half of the PCB is flooded. Doing this, we again massively influence the thermal conductivity of the bare board. As you see, the PCB now gets a very different temperature distribution on its surface. The heat can leave more easily from the left half of the trace, because there it has the heat-spreading support of the copper plane.

Observation: The same amount of current (9 A) now leads to 19°C and 38°C temperature rise above ambient at the two shown observation points.

Just as a reference, the next image shows the inner-layer temperature of the copper plane.

Conclusion: I have shown that you cannot safely answer the question of how much current a trace can carry and how much its temperature will rise. It depends on so many parameters that you would need a simulation tool capable of taking all those parameters and physical effects into account to accurately predict what will happen. Such tools exist; that is not the problem.

However, we cannot ask PCB designers to run those time-intensive simulations all the time.

What else can a PCB-Layout Engineer do?

IPC-2221 and, more up to date, IPC-2152 contain charts which show the relationship between the current passing through a trace of a certain cross-section area and how much the trace heats up. I do not dare to copy any chart of IPC-2221 here for copyright reasons, but try to get access to them. The charts look like this…

Essentially, they give the PCB layouter a guideline for what cross-section area a trace must have if it is to carry a certain electrical current while staying within a certain maximum allowed temperature rise.

Later, in IPC-2152, many more parameters were taken into account, and more information is available to guide a PCB layout engineer.

The values that I read from those charts (IPC-2221, traces on an outer layer) I translated into other values like „maximum allowed current density“ and the other values shown in the table.

Let us now apply those values to a small example circuit: transistors charging an inductor. See the two small current paths T1 -> L1 and L1 -> T2 and a possible implementation in the PCB layout.

If both paths shall carry 10 A with a maximum 10°C temperature rise on a layer with 70 µm copper thickness, then we require a minimum trace width of 3.8 mm (149 mil), which cannot be routed as a plain trace because of the clearance rules against the shown obstacles. The following image shows an implementation with wide traces, but one segment can only reach 3.1 mm instead of 3.8 mm because of the clearance rules.

So we decide on an implementation using plane shapes on one layer. This then requires that we use the measurement tool in the PCB editor, visually try to identify the path that the DC current will take, and measure whether we meet the required width, i.e. cross-section area, everywhere. You will hopefully agree that this is painful and error-prone.

Now let us assume that the required minimum cross-section could not be achieved by routing on one layer and we needed a second layer. To create a meaningful example, I used a second routing layer but constructed a different shape on it.

Again, it is an error-prone task to validate, now on two routing layers, whether the width (cross-section) meets the requirements.

How could we do that in another way?

Remember our IPC tables. Our requirement of 10 A / 10°C at the given 70 µm copper thickness could also be translated into a requirement to have nowhere more than 37.8 A/mm² (24.4 mA/mil²).
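The translation between trace width and current density is simple arithmetic, using the numbers from this example (10 A, 70 µm copper, 3.8 mm width):

```python
# Sketch: translate the 10 A / 10 degC requirement at 70 um copper into a
# maximum current density, and back into a minimum trace width.
T_CU = 0.070  # copper thickness in mm (70 um, ~2 oz)

def current_density(current_a, width_mm, thickness_mm=T_CU):
    """Current density in A/mm^2 for a rectangular cross-section."""
    return current_a / (width_mm * thickness_mm)

def min_width(current_a, j_max_a_per_mm2, thickness_mm=T_CU):
    """Minimum width in mm so that the current density stays below j_max."""
    return current_a / (j_max_a_per_mm2 * thickness_mm)

j = current_density(10.0, 3.78)  # the ~3.8 mm trace from the article
print(f"{j:.1f} A/mm^2 = {j / 1550.0 * 1000:.1f} mA/mil^2")   # ~37.8 / ~24.4
print(f"min width at 37.8 A/mm^2: {min_width(10.0, 37.8):.2f} mm")  # ~3.78 mm
```

The conversion to mA/mil² uses 1 mm² = 1550 mil², which reproduces the 24.4 mA/mil² figure.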

Now let me show you how to do this with a simulation tool that gives us the current density anywhere in the copper, no matter whether the routing was done with shapes or traces, and regardless of the copper thickness. All of that is taken care of. For the purpose of this article I used „PADS HyperLynx DC Drop“, a simulation add-on to the „PADS“ PCB design software from Mentor, a Siemens business. By the way, this also exists in other configurations for users of all other PCB design tools, as long as they can export ODB++ or IPC-2581.

In „HyperLynx“, I made two simulation runs, nominating T1/T2/L1 as voltage sources or current sinks – as simple as modelling the transistor pin as a 200 V voltage source and the inductor pin as a 10 A current sink. This takes very little setup time (minutes). The simulations then run in a few tens of seconds and give me results in the form of reports and, moreover, very nice graphical pictures for documentation – for example a visualization of the current density as a vector plot.

Current-density visualization: 10 A between T2/L1 in the implementation with shape-based routing on only one layer.

Current-density visualization: 10 A between T1/L1 in the implementation with shape-based routing on two layers with different shapes.

In those pictures you also saw an „inspection cursor“, with which you can easily inspect the current density at any point of the layout. There are other ways to plot the result as well, but that would go beyond the scope of this article.

If you are using a PCB layout editor and HyperLynx DC Drop, both from Mentor, then you can set up the requirement for maximum current density within the Constraint Manager of the PCB layout editor and have it validated by the simulation tool.


I tried to illustrate that an accurate analysis of the „current carrying capacity“ of traces or shapes depends on many parameters. The same trace (cross-section area) will behave differently from project to project, depending on what else is on the board and what the stackup is. I pointed out that software tools exist which can make a very detailed and accurate prediction of what will happen. I then argued that running such simulations in every project is maybe too much work for a PCB designer. Instead, we might want to look into IPC-2221 or, better, IPC-2152 to get advice on what trace width (cross-section area) to construct, depending on the actual design parameters of our project. I then proposed to derive a „maximum current density“ from those charts and to verify it by means of much simpler simulation tools. Only in cases where you need to push closer to the possible limits should you start using simulation tools that take many more relevant parameters into account, to analyse your PCB layout even more accurately.

Call to action

If you have an actual project where you would like to share your experience, or if you are looking for support in verification, feel free to contact me.


IPC-2221 documents

User's manual of the software „HyperLynx Thermal“

Screenshots of the software „HyperLynx DC Drop“, the „PADS Professional“ layout editor and Constraint Manager, and the „FloTherm XT“ CFD and thermal analysis software.


Propagation Delay of Traces / Inner vs Outer Layer Routing

In this article, written in English, Hans Hartmann, Sales Manager DACH at Cadlog GmbH, explains how signal propagation delays in traces behave when routing on inner versus outer layers.

I would like to talk about the different propagation delays of traces on inner layers versus outer layers. Again, the reason is that a couple of our customers asked me how to set the correct constraints in PCB design tools when doing length matching or propagation-delay matching. The formula most commonly used to calculate the propagation velocity of an electrical signal on a trace is:

On typical PCB material, with Er = 4, we get the rule-of-thumb values of about ~15 cm/ns or ~169 ps/inch. Now let us look in a bit more detail at the two types of traces and the geometry assumptions for which the above formula is valid. Traces on an outer layer, referenced to only one reference plane (e.g. a ground plane), are called a „microstrip“. Traces on an inner layer, referenced to two reference planes above and below the trace, are called a „stripline“. The next image shows which parameters determine the propagation delay (3 cm trace length).
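The rule-of-thumb values can be reproduced from the dielectric constant alone, using the standard stripline relation v = c / √Er (a microstrip uses an „effective“ Er somewhere between that of the dielectric and that of air):

```python
# Sketch: propagation velocity and delay from the dielectric constant,
# using the standard stripline relation v = c / sqrt(Er).
from math import sqrt

C0 = 29.9792458  # speed of light in vacuum, in cm/ns

def velocity_cm_per_ns(er):
    return C0 / sqrt(er)

def delay_ps_per_inch(er):
    return 2.54 / velocity_cm_per_ns(er) * 1000.0

print(f"{velocity_cm_per_ns(4.0):.1f} cm/ns")   # ~15.0 cm/ns at Er = 4
print(f"{delay_ps_per_inch(4.0):.0f} ps/inch")  # ~169 ps/inch at Er = 4
```

With Er = 4 the square root is exactly 2, which is where the convenient 15 cm/ns figure comes from.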

With the parameters of that example, we get 207.5 ps (and, by the way, an impedance of 50 Ω). Notice also the electrical resistance (@20°C) and the inductance and capacitance.

The dielectric heights, dielectric constants and trace geometry influence the trace's R, L, C and hence Z0. (Z0 is frequency dependent; the value shown is based on the loss-less formula for higher frequencies.)

The propagation delay depends „only“ on the dielectric constant. See for example the second image, in which I changed the dielectric heights: the propagation delay is the same 207.5 ps; only R, L, C and Z0 changed.

Now notice in the next image how another change in the dielectric heights again does not change the propagation delay, but does change R, L, C and Z0.

Now let us see which parameters influence a „microstrip“ on an outer layer. Because the „effective“ Er (epsilon-r) around the trace matters, and because the trace is on an outer layer, we need a model that includes the effect of the solder mask coating. See the next image. I made a stackup construction and chose a trace width such that we again get a 50 Ω impedance, for comparison.

A 3 cm trace length would get 181 ps of delay.

Observation: A 3 cm microstrip and a 3 cm stripline can have very different propagation delays!

Conclusion: If we route a bundle of traces, e.g. the DQ/DQS data and strobe lines of a DDRx memory interface, and length-match them, we still need to take care that the lengths match per layer! So we can, for example, length-match them provided we have exactly the same length on the outer layer (e.g. the same short fanout length into the via).
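To get a feeling for how much skew pure length matching across layers can produce, here is a sketch using the per-length delays from the two examples above (207.5 ps and 181 ps for 3 cm); the route lengths themselves are made-up illustrative values:

```python
# Sketch: skew between two "length-matched" routes with a different layer mix,
# using the per-length delays from the article's stripline/microstrip examples.
STRIPLINE_PS_PER_CM  = 207.5 / 3.0  # inner layer, ~69.2 ps/cm
MICROSTRIP_PS_PER_CM = 181.0 / 3.0  # outer layer, ~60.3 ps/cm

def skew_ps(outer_cm_a, inner_cm_a, outer_cm_b, inner_cm_b):
    """Delay difference of two routes that have the same total length."""
    t_a = outer_cm_a * MICROSTRIP_PS_PER_CM + inner_cm_a * STRIPLINE_PS_PER_CM
    t_b = outer_cm_b * MICROSTRIP_PS_PER_CM + inner_cm_b * STRIPLINE_PS_PER_CM
    return t_a - t_b

# Both routes are 5 cm long, but route B runs 1 cm more on the outer layer:
print(f"{skew_ps(1.0, 4.0, 2.0, 3.0):.1f} ps")  # ~8.8 ps of skew
```

With these numbers, every centimeter of layer mismatch costs roughly 9 ps, even though the total routed lengths are identical.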

But on the outer layer it is even more tricky. See the next image, where I changed the thickness of the solder mask.

Observation: It impacts our Z0, but it also impacts our propagation delay in a significant way – at least if we have high-speed, timing-critical signals on the outer layer.

Why is that? Because the EM field around the trace extends beyond the solder mask thickness and hence reaches the surrounding material – in this model „air“, which has an Er = 1.

Conclusion: Timing-critical signals on outer layers need extra care to control the surrounding dielectric. That is, we need to control the solder mask material and thickness, and to take care when there is e.g. a special coating on the surface, or when the PCB is not surrounded by „air“ but maybe by a molding material.

Can this article finish now? If I am asking like that – guess what, I would like to draw your attention to one more topic.

Is propagation delay the „only“ thing that matters for meeting signal timing requirements?

Of course not. Allow me to compare two routing scenarios. See the next picture.

In both cases I made a setup in a signal integrity simulator, namely HyperLynx. The setup is two IC drivers using the IBIS simulation model of a 4 Gb DDR3 memory, both times the DQ0 pin, hence exactly the same simulation model driving into a 50 Ω load. The routing runs on the top layer, through a via into layer 3, and through a via back onto the top layer.

Please check the details. This example is constructed in such a way that:

  • all trace segments have the same impedance, 50 Ω
  • I implemented, on purpose, different lengths on the inner and outer layers
  • the routed lengths are different, in order to match exactly the same „propagation delay“
  • both signals pass through the vias in exactly the same way

When we simulate this, we expect both signals to arrive at exactly the same time at the load, right? The simulation result is shown in the next image, and to our „surprise“ we see some amount of skew, measured at an arbitrary voltage level of 0.8 V. The skew is 2.78 ps!

Observation: The two signals do not arrive at the same time, although we constructed the two different interconnects to have exactly the same propagation delay.

The explanation is simple. Our traces indeed have exactly the same propagation delay, but the interconnects have different R, L, C – even though, by construction, Z0 is the same. The next picture shows the R and C of each segment.

Path-1: R=0.189Ω / C=4.6116pF / RC=0.87ps

Path-2: R=0.240Ω / C=4.6246pF / RC=1.11ps

So the signal on path 2 arrives a little bit later at the arbitrary measurement threshold of 0.8 V.
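The RC products quoted above can be reproduced directly. Keep in mind this lumped single-pole view is only a rough indicator of which path is slower; the inductance and the distributed nature of the lines contribute to the full 2.78 ps as well:

```python
# Sketch: lumped RC products of the two paths (R and C values from the article).
# Numerically, ohm * pF gives the RC product directly in picoseconds.
paths = {
    "path-1": {"r_ohm": 0.189, "c_pf": 4.6116},
    "path-2": {"r_ohm": 0.240, "c_pf": 4.6246},
}

rc = {name: p["r_ohm"] * p["c_pf"] for name, p in paths.items()}  # in ps
for name, t in rc.items():
    print(f"{name}: RC = {t:.2f} ps")
print(f"RC difference: {rc['path-2'] - rc['path-1']:.2f} ps")
```

Path 2 has the larger RC product, which is consistent with its edge crossing the 0.8 V threshold later.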

Conclusion: Even if you match the timing of your signals – which is by far better than only length-matching – there are still differences. The differences might be in the range of a few picoseconds; however, that might already be a relevant difference in a fast DDRx memory interface.

Now let me construct yet another scenario to demonstrate something. We simulate the same interconnect again, but this time with the IBIS models of the DQ0 and DQ1 pins of the same device.

Observation: The skew is now 3.49 ps (DQ0/DQ1 models) compared to 2.78 ps (both DQ0 models).

Explanation: The chip-internal delay varies from pin to pin. In IBIS models, the package delay is often modeled by RLC values per pin:

[Pin]  signal_name  model_name  R_pin    L_pin   C_pin
B3     DQ0          DQ          290.49m  1.31nH  0.45pF
C7     DQ1          DQ          264.40m  1.24nH  0.42pF

This gives us even further details that could be taken into account. For example, you might get values for the package-internal length or the package-internal delay that you can enter into your PCB design tools as design rules.
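One rough way to turn such pin parasitics into a per-pin delay estimate is the common single-segment approximation t ≈ √(L_pin·C_pin); the actual IBIS package model may be more detailed, so treat this purely as an order-of-magnitude sketch:

```python
# Sketch: rough package-delay estimate from the IBIS pin parasitics above,
# using the single-segment approximation t ~ sqrt(L_pin * C_pin).
from math import sqrt

pins = {
    "DQ0 (B3)": (1.31e-9, 0.45e-12),  # (L_pin in H, C_pin in F)
    "DQ1 (C7)": (1.24e-9, 0.42e-12),
}

delays = {name: sqrt(l * c) * 1e12 for name, (l, c) in pins.items()}  # in ps
for name, t in delays.items():
    print(f"{name}: ~{t:.1f} ps")
print(f"pin-to-pin difference: ~{delays['DQ0 (B3)'] - delays['DQ1 (C7)']:.1f} ps")
```

The estimated pin-to-pin difference of roughly 1.5 ps is on the same scale as the extra skew the simulation showed when switching from DQ0/DQ0 to DQ0/DQ1 models.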


In this article I explained how propagation delays are calculated. You can use a variety of tools, even free spreadsheet calculators; I showed the calculations as they are made with the HyperLynx software. Then I suggested taking care when deciding whether „length matching“ is what you really want, versus „matching of propagation delays“.

In case you are doing really high-speed designs and want to match within a tolerance of a few picoseconds, then you should use software tools that really do a simulation. HyperLynx with its „DDRx Wizard“ is such a tool. It not only validates all timing automatically, but also looks at the waveforms of all involved signals and verifies them against the many more rules that exist for DDRx signals.


From PCB design to nanoparticles: how to arrive at a great discovery

This case study on PCB design is also the story of one of the greatest scientific achievements of the century, made possible by the hard work of many people and the use of powerful, highly sophisticated technologies.

It all began in 1913 with the first theories of quantum mechanics: the foundation of one of the greatest revolutions in our view of the world. Together with Albert Einstein's theory of relativity of 1905, Niels Bohr's quantum theory allowed a more realistic description of fundamental natural phenomena compared to classical physics. Quantum mechanics describes radiation and matter both as wave phenomena and as particles: the opposite of classical mechanics, in which, for example, light is described only as a wave and the electron only as a particle.

A legendary debate developed between Einstein and Bohr, which was largely resolved only in the second half of the 1960s with the development of the so-called Standard Model. The Standard Model, which describes the properties of the fundamental interactions within matter, presupposed among other things the Higgs boson, a kind of particle underlying matter. The Higgs boson, theorized in 1964, was finally „seen“ live in 2012 and proven thanks to the experiments carried out with the LHC accelerator at CERN in Geneva.

Download the eBook with a successful example of PCB design for complex electronic boards

„CAEN. From PCB design to nanoparticles: how to arrive at a great discovery“

From Viareggio to Geneva in search of the secrets of matter

The LHC at CERN is a particle accelerator with a circumference of 27 km that allows the structure of matter to be studied at the subnuclear scale. With it, answers to fundamental questions about reality can be sought. Imagine how sophisticated such a machine must be! A large part of the LHC's electronic equipment was built by CAEN S.p.A. in Viareggio, Tuscany.

CAEN, a spin-off of the Italian Institute for Nuclear Physics, has been supplying the electronics for the most important experiments in physics for more than 40 years: mainly instrumentation for particle or radiation detectors. Thanks to close collaboration with several research laboratories, including CERN, the Italian company helps explore phenomena such as neutrino physics or the study of dark matter. In addition, CAEN also designs instrumentation for industry.

Typical CAEN products are high- and low-voltage power supplies for particle physics experiments and devices for digital signal processing. These devices are made possible by a research and development department of 40 physicists and engineers, which is the great strength of this company. CAEN's R&D department is precisely what allows its electronics to be used in scientific experiments as advanced as those on nanoparticles.

The need for advanced design tools

Another necessary prerequisite for realizing these highly sophisticated devices is the availability of design tools that can handle complex electronic boards. „When our R&D team started designing high-speed boards with DDR4 memory and signals up to 8 gigahertz,“ say the company's hardware project managers, „we could not get confirmation from the CAD that they worked properly. We initially worked with an external partner who uses HyperLynx, in order to integrate simulation into the design flow.“

The R&D team therefore decided to introduce PADS Professional on some workstations, since the other PCB design tools could not manage projects of this complexity. „When designing with PADS Professional we noticed much faster interactivity. In particular, the size of the project justified a decision such as introducing a new tool in the company. An initial investment in staff training was required, but it was justified by the increased design speed. In PADS Professional, for example, the rules for high-speed timing are easier to define. We also noticed significant differences when working out extensive plane shapes and routing.“



When PADS Professional makes the difference

The CAEN case study is particularly significant with regard to the differences between the various tools available on the market today. While other tools may ease the first approach thanks to a user-friendly interface, it is much more advantageous to use a tool like PADS Professional once the project becomes more complex. The ability to design faster immediately translates into lower costs and a faster go-to-market.

From a purely technical point of view, the case study we have described points to four characteristics that make PADS Professional unique.

  1. An integrated database for each project, guaranteeing the integrity and continuity of the data between schematic and PCB, and throughout the entire electronic flow, through synchronous data exchange.
  2. An advanced design-rule system with a spreadsheet-like structure, organized at the database level, for defining physical and electrical rules. This ensures full compliance with design constraints through a single application that is accessible across the entire electronic flow.
  3. Placement planning by logical groups, so that you no longer work on each individual component but can plan the board placement via the logical functions of the schematic.
  4. Integrated pre- and post-layout signal integrity simulators based on Mentor's globally recognized HyperLynx technology. This allows the quality of the signals to be verified both in the specification phase and during the realization of the PCB, up to the highest frequencies in use today.

Here a small secret behind a great scientific discovery is revealed. Achieving challenging goals requires competent, motivated and resourceful people as well as top technologies. And to create such technologies, everything must be at the highest level, starting with the software used for the design.

Download the eBook: a PCB design success story


CAEN. From PCB design to nanoparticles: how to arrive at a great discovery

Our eBook contains a successful example of PCB design in complex projects. Learn more about:

  • how a small design team can have very sophisticated yet easy-to-use software;
  • how to ensure the integrity and continuity of data between schematic and PCB, and across the entire electronic flow;
  • how to have an advanced design-rule system and use it to its full extent;
  • how to manage a placement planner for logical groups;
  • how pre- and post-layout signal integrity simulators are integrated.