Needing more than a spark test?

"Maxim Integrated and Analog Devices are now one company"
Actually, I think it might be more of a takeover than a merger, but one wonders if this means Maxim devices will appear in the LTSpice model library? :)
Having retired from Maxim Integrated before all that occurred, I don't have any direct knowledge of whether it was a takeover or a merger. But from talking to some of my ex-coworkers who are still there, my impression is that ADI holds the reins. I think ADI also paid a lot for the privilege. ADI had approached Maxim before with a buyout offer and was turned down, so I presume they sweetened the deal. I think it largely was a response to Texas Instruments' acquisitions of some big guns like Burr-Brown, National and some others. Go big or go home.....

As far as Spice models go, I think it is pretty likely that at least some of Maxim's parts will make it into the LTSpice library. A fair number of Linear Technology devices (another company ADI acquired) are already present, so there's some precedent there.

Speaking of TI, I've noticed that they have some pretty nice ADCs for decent prices. For example, the ADS8471 is a parallel-out, 16-bit, 1 MSPS device with an integrated Vref generator and a built-in multiplexer so you can interface it with an 8-bit parallel port. Digikey has them for about $32. My Teensy 4.0 doesn't have 8 contiguous inputs, but perusing its schematic and the processor data sheet turned up two blocks of 4 bits each in one of the GPIO registers that could easily be combined. With a 600 MHz clock, shifts don't take very long, so I'm pretty sure I could get the full 1 MSPS out of it.
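Something along these lines is what I have in mind for merging the two nibbles. The register name and bit positions below are just placeholders, not the actual Teensy 4.0 pin mapping, and the pin setup and the ADC's byte-select handling aren't shown:

```cpp
#include <Arduino.h>

// Read one byte from the ADC's 8-bit bus by merging two 4-bit fields
// pulled out of a single GPIO port read. Bit positions are placeholders.
static inline uint8_t readAdcByte() {
  uint32_t port = GPIO6_PSR;             // port status register (pins configured as inputs elsewhere)
  uint8_t lo = (port >> 2)  & 0x0F;      // placeholder: low nibble at bits 2..5
  uint8_t hi = (port >> 16) & 0x0F;      // placeholder: high nibble at bits 16..19
  return (uint8_t)((hi << 4) | lo);      // a couple of shifts/ORs is nothing at 600 MHz
}

// The 16-bit result is then assembled from two byte reads, with the ADC's
// byte-select pin toggled between them (not shown).
static inline uint16_t readAdcWord() {
  uint16_t msb = readAdcByte();
  uint16_t lsb = readAdcByte();
  return (uint16_t)((msb << 8) | lsb);
}
```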
 
@whitmore :
I realize that you need a better response, and I apologize. I happen to be going the route of integration by simply adding up ADC samples during a pulse, and indeed, the concept is to have a "circular" buffer store a stretch of samples a good deal longer than any one pulse, or perhaps a small crowd of them awkwardly overlapping. Combined with a trigger, or some effective software alternative, one gets to capture the whole event, including what happened "before" the trigger. I do love analog computing integrators, and I was originally minded to use one, but I ended up being practical. We get there with software, and save the cost.
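In code, the idea is no more than something like this. All the names and sizes are made up, just to show the shape of it:

```cpp
#include <stdint.h>

const int RING_SIZE = 4096;                 // much longer than any single ~13 us pulse
volatile float ring[RING_SIZE];             // filled continuously by the ADC interrupt
volatile int   head = 0;                    // index of the most recent sample

// Once a trigger fires, sum the samples spanning the pulse, including a little
// pre-trigger history that is already sitting in the buffer.
float integratePulse(int triggerIndex, int preSamples, int postSamples) {
  float sum = 0.0f;
  for (int i = -preSamples; i < postSamples; i++) {
    int idx = (triggerIndex + i + RING_SIZE) % RING_SIZE;   // wrap around the ring
    sum += ring[idx];                       // baseline subtraction would go here too
  }
  return sum;                               // area under the pulse ~ photon energy
}
```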

The integrated waveshape
The rise and decay of the surge of electrons from released carriers, flowing into the transimpedance amplifier, is the energy we want to collect. The area under the resulting voltage pulse waveform is the direct analog of the photon energy. [1] You are absolutely right that one can accurately capture this value using a nice op-amp integrator. The result arrives at the speed of electrons; it can outrun any digital sampling and calculating. I had in mind an op-amp with a quality capacitor in the feedback loop, shunted by a resistor and an analog switch for reset. That stage could hold the value for an ADC until reset.

Call it lazy, but I left out the analog integrator because a good one with low offset error is quite elaborate, both in design and parts, and one can save all its cost and PCB tracking simply because there has to be an ADC there anyway, and it can go fast enough to get a pretty good integration by doing it in little pieces. This is doing it in software instead. Also, having "historical" data of the pulse in the buffer allows a whole bunch of other software tricks, like recognizing a pulse overlap, or spotting an out-of-range event, both reasons for rejection.
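For example, a crude qualification pass over the buffered pulse might look something like this; the thresholds and the peak-counting logic are placeholders for whatever rejection rules turn out to be useful:

```cpp
// Sketch of the kind of software rejection the buffered history makes possible.
bool pulseLooksValid(const float* samples, int n, float baseline,
                     float maxAmplitude, float noiseBand) {
  int peaks = 0;
  bool rising = false;
  for (int i = 1; i < n; i++) {
    float a = samples[i] - baseline;
    if (a > maxAmplitude) return false;                 // out-of-range event: reject
    bool nowRising = samples[i] > samples[i - 1] + noiseBand;
    if (rising && !nowRising && a > noiseBand) peaks++; // crude local-maximum count
    rising = nowRising;
  }
  return peaks == 1;                                    // more than one bump -> likely pile-up
}
```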

Analog computing was used when you needed accurate real-time answers at a speed no digital computing can ever match, such as in servo control of missile fins or some rocket engine gimbals. In some applications, probably military, I dare say it's still there, and now, curiously, is making a comeback. It's what I used to do - long ago :)

Mark's experiments indicate that the arrival of nice valid pulses would be quite slow, meaning from a few per second up to perhaps some hundreds of Hz. My guess was anywhere from slow up to perhaps 400 Hz. For me, it's not so much about the arrival rate as about the 13 µs pulse duration. Allowing bandwidth-limited pulse distortion to "stretch" the pulse, just to have an easier time with a slow-sampling ADC, is not what I wanted to do.

[1] Energy --> current x time. Unfortunately only a few tens of thousands of electrons' worth.
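For a sense of scale, a back-of-envelope figure, assuming, say, a 59.5 keV Am-241 photon fully absorbed in silicon at roughly 3.6 eV per electron-hole pair:

```latex
N \approx \frac{59{,}500\ \mathrm{eV}}{3.6\ \mathrm{eV/pair}} \approx 1.6\times10^{4}\ \text{electrons},
\qquad
Q = N e \approx 1.6\times10^{4}\times1.6\times10^{-19}\ \mathrm{C} \approx 2.6\ \mathrm{fC}.
```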
 
To add to Graham's comments, I believe there are other reasons to get the signal into the digital domain as soon as possible. The main justification is based on my own observations of the pulse waveforms I've seen, plus information from the Theremino folks, who have commented that it is very important to process only very well-shaped pulses when generating the XRF spectrum. It is much, MUCH easier to implement sophisticated pulse differentiation (in a logical sense, not the mathematical derivative sense) algorithms in the digital realm than in the analog one. In fact, I'm willing to bet that digital far exceeds what analog can do in this regard for an approximately equal expenditure on processing elements, despite my love for analog.

Looking at the analog signal coming out of my PocketGeiger detector/amp chain, my observation is that the pulse shape can exhibit substantial variations, probably due to pulse overlap plus noise of various types (environmental, popcorn, 1/f, etc.). This situation is a real killer when it comes to getting a decent XRF spectrum, and it would be pretty hard for a purely analog implementation to address in a simple and cost-effective manner.

My concern and emphasis in this regard is that the energy differences between iron and vanadium, chromium, cobalt, manganese and nickel are going to be close to the detector's energy resolution, if not less (more on that in the next paragraph). So anything we can do to improve the situation will help us.

So, what if our detector's energy resolution isn't good enough to tell the difference between, say, iron and its neighbors manganese and cobalt? That's where physical energy filters may come into play. As has been noted by me and others, the absorption vs. energy curve for each of these elements exhibits a sharp step (its absorption edge) at a photon energy that differs from element to element. By using that property with a set of filters we could better differentiate these elements. The filters would be thin sheets of chromium, manganese, iron and cobalt. Some of these elements aren't necessarily easy to come by (especially at a reasonable price!), but that's yet another opportunity for hobbyists to get creative. I've shown that the filter's "finesse" can improve without limit as its thickness increases, but of course the overall attenuation increases as well -- so there's definitely a limit to the real-world improvement we can achieve. One thing in our favor is that we may be willing to wait longer to get a result, compared to a commercial lab.
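Roughly speaking, with Beer-Lambert attenuation (mu the attenuation coefficient, t the filter thickness), the contrast across a filter's absorption edge grows with thickness even as the overall transmission falls, which is exactly the trade-off described above:

```latex
T(E) = e^{-\mu(E)\,t},
\qquad
\frac{T(E_{\text{just below edge}})}{T(E_{\text{just above edge}})}
  = e^{\left[\mu(E_{\text{above}})-\mu(E_{\text{below}})\right]t}
  \longrightarrow \infty \quad \text{as } t \to \infty .
```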
 
"I left out the analog integrator because a good one with low offset error is quite elaborate...there has to be an ADC there anyway, and it can go fast enough to get a pretty good integration by doing it in little pieces. This is doing it in software instead. Also, having "historical" data of the pulse in the buffer allows a whole bunch of other software tricks,"

Yeah, I see the merit; in the old days, differential linearity was the big issue with using a packaged ADC. We always used the single-slope scheme (rather slower than SAR) because SAR (successive approximation with a DAC and feedback) didn't have as much accuracy as is now available, and with a precise capacitor-discharge current source we could pick our precision.

I think the integration onto a capacitor still has signal/noise merit, and would prefer (for peak shape analysis) a simple preamplifier pickoff and... using an outboard digital sampling oscilloscope for those extra bits of info. But, with good linearity and fast conversion, you can digitize before the peak is detected, which means the detection can be refined in field-upgradable software, along with other interesting possibilities.

How difficult is it to get an outboard fast ADC-FIFO combination? A circular-buffer ADC? That takes the pay-attention load off of the control CPU, and allows an integrator-based coarse peak detector (interrupt? polling?) to work in conjunction with the digitized peaks.

As for offset error, I'd think it's easy enough to take a few no-pulse-present samples after reset, throw away outliers, and use those samples for a zero baseline.
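A trimmed-mean version of that could be as simple as the following, with the sample count and trimming fraction picked arbitrarily:

```cpp
#include <algorithm>

// Estimate the zero baseline from no-pulse-present samples taken after reset:
// sort, discard the extremes, and average what's left.
float zeroBaseline(float* samples, int n) {
  std::sort(samples, samples + n);
  int trim = n / 10;                        // throw away the top and bottom 10% as outliers
  float sum = 0.0f;
  int count = 0;
  for (int i = trim; i < n - trim; i++) {
    sum += samples[i];
    count++;
  }
  return (count > 0) ? (sum / count) : 0.0f;
}
```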
 
Even if we do get a collection plot of fine, high-resolution, separated "bumps" at various energies, I think it likely that the deliberate physical filtering strategy Mark mentioned will be important for identifying the alloys and proportions we would be interested in. The energies coming from cobalt, nickel, and iron are such a close muddle that it's a tough call.

Mostly, we can correctly infer what is there from the combination of peaks present. I am hopeful we might get to see some counts from sulphur and phosphorus (if we wait some minutes), and just maybe, silicon. I think aluminium and magnesium might be beyond what we can expect, unless we discover a better diode.

The lead present in free-machining steel alloys would not get excited by Am-241 radiation at lead's main K-shell peaks (too feeble), but it should show L-shell energies at 10.5 keV and 12.6 keV. These might spoof arsenic or selenium, but the presence of both peaks together makes it very probably lead.
 
A program capable of identifying elements using multiple peaks would probably need to use mathematical correlation -- easiest to do in the frequency domain. But I think worrying about this kind of stuff is putting the cart before the horse. I want to see if we can get a spectrum -- of any kind, with amazing or (more likely) so-so resolution -- before moving on to sophistication like energy filters, background subtraction, correlation using elemental standards and the like. I have to keep hauling myself back from considering those other fun things, myself.....to work on the more basic aspects first....
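Just to park the idea for later: the correlation could be as simple as a normalized dot product between the measured spectrum and an element's reference line pattern, shown here in direct form for brevity, though for long spectra it would normally be done with FFTs. Everything here is a placeholder:

```cpp
#include <math.h>

// Score (0..1, for non-negative spectra) of how well a measured spectrum matches
// a reference template of expected peak positions/intensities for one element.
float correlationScore(const float* spectrum, const float* reference, int nBins) {
  float num = 0.0f, s2 = 0.0f, r2 = 0.0f;
  for (int i = 0; i < nBins; i++) {
    num += spectrum[i] * reference[i];
    s2  += spectrum[i] * spectrum[i];
    r2  += reference[i] * reference[i];
  }
  return (s2 > 0.0f && r2 > 0.0f) ? num / sqrtf(s2 * r2) : 0.0f;
}
```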
 
I cannot confirm it yet, but it kinda looks like the PyMCA suite may have identification smarts built in.
 
Just a quick update on my front end code-writing. I've had some under-the-hood things I needed to think about and find solutions for that (may) work well enough to test the whole system. The main issues were: how to determine that the analog front end has settled enough to ensure the no-pulse baseline remains consistent during the course of acquiring a spectrum; how to (hopefully) reliably trigger on a pulse and then format the data for my fast polynomial peak-find; and, most importantly, how to ensure that the background ADC/ring buffer functions aren't going to be mucked up by the foreground routines that are doing things like checking the USB interface for commands from the host, or changing acquisition parameters like the sampling rate, trigger level, etc.
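Roughly, the idea for that last one is to touch the shared acquisition parameters only with interrupts briefly off, something like this simplified sketch (structure and names are illustrative, not the actual code):

```cpp
#include <Arduino.h>

// Acquisition parameters shared between the foreground and the ADC ISR.
struct AcqParams {
  float triggerLevel;      // volts
  float baseline;          // volts
  uint32_t sampleRateHz;
};

volatile AcqParams activeParams;   // the copy the ISR actually reads

// Foreground updates go through here so the ISR never sees a half-written set.
void updateParams(const AcqParams &p) {
  noInterrupts();                  // lock out the ISR for a few cycles
  activeParams.triggerLevel = p.triggerLevel;
  activeParams.baseline     = p.baseline;
  activeParams.sampleRateHz = p.sampleRateHz;
  interrupts();
}
```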

Along the way I've had to rip up preexisting code and redo it in light of problems I found with the approaches taken. For instance, I had originally stuffed integer ADC data into my ring buffer, the idea being that it would introduce less overhead in my ISR -- but then the subsequent pulse qualification stuff had to convert all the data into floating point. Since the Teensy has an FPU, I decided it made a lot more sense to go directly to floating point. This in turn simplified the trigger stuff because I now can just look at the pulses with my oscilloscope and immediately change trigger parameters, expected baseline voltage, etc., without dealing with scale factors, float-to-int conversions and the like.
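In other words, the ISR now does the one scaling multiply itself and everything downstream works in volts. A simplified sketch, with the ADC read and scale factor as placeholders:

```cpp
#include <Arduino.h>

const int RING_SIZE = 4096;
volatile float ringBuf[RING_SIZE];     // samples stored as volts, not raw counts
volatile int ringHead = 0;

const float VREF = 3.3f;               // placeholder reference voltage
const float LSB  = VREF / 4096.0f;     // placeholder scale for a 12-bit converter

uint16_t readAdcRaw();                 // placeholder for however the converter is actually read

void adcISR() {                        // runs at the sampling rate
  float volts = readAdcRaw() * LSB;    // one FPU multiply per sample
  ringBuf[ringHead] = volts;
  ringHead = (ringHead + 1) % RING_SIZE;
}
```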

These nitty-gritty details are the things that actually will make or break the system, and, in my experience, are the most difficult to get right. They can produce some problems that are mighty difficult to debug.

All this said, I'm near to the point of flashing an LED when the program thinks a valid pulse has been detected.
 
No flashing LEDs yet, but I have at least gotten my code to compile. I had done quite a bit of code-writing without compiling so expected a lot of error messages but was pleasantly surprised by how few there were. Nothing difficult to figure out, mostly things like missing semicolons and a few typos. Of course, that doesn't mean there are no functional errors in the code :)

At this point the front end code is 407 lines long, so not so unwieldy as to make it overly difficult to maintain. I expect it to grow some as I get some real data to analyze. I can use the Arduino's Monitor tool to see how things are working, and can also use it to modify the pulse acquisition parameters on the fly.

There are a number of system constants that currently just have best-guess values so I know there will be some tweaking needed once I start getting some data to examine. For instance, I have a guesstimate for the baseline peak noise voltage, and that guess is used to help determine when a "real" pulse is detected.
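Roughly, that guess just enters a threshold test like the one below; all three numbers are placeholders to be tuned once real data is in hand:

```cpp
const float BASELINE_V     = 1.65f;    // guess: quiescent output level
const float PEAK_NOISE_V   = 0.005f;   // guess: baseline peak noise
const float TRIGGER_FACTOR = 3.0f;     // pulse must clear the noise by this factor

bool isTriggered(float sampleVolts) {
  return (sampleVolts - BASELINE_V) > TRIGGER_FACTOR * PEAK_NOISE_V;
}
```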

Work on this has slowed down with the arrival of summer weather -- the lion's share of time has been spent getting the garden in shape, and attacking all the weeds that came up in our wet and cool spring/early summer. So I'm poking at it when time permits....
 