Needing more than a spark test?

If anyone else takes a similar route for the enclosure, I have a little word of warning. Extruded aluminum plates appear to be slightly bowed. This caused some problems when drilling and tapping holes to assemble the thing -- centering the holes on the thickness didn't mean they all ended up in the same location in a 3-D sense. I found it necessary to make a set of transfer punches out of some set screws. Using precision-cast aluminum would have made it a lot easier, but at a considerably higher cost.

If I make another one of these I will probably take a different route: a thicker top and bottom with bent aluminum sheet for the sides. The thinner aluminum will be much more forgiving of tolerance variations. If a scintillator turns out to be the way to go, the enclosure could be made light-tight with black silicone caulk.
 
One suggestion: shielding is always easier nearer the source! So putting metal tubes, or a plate with holes drilled in the direction you want the X-rays to go, will radically reduce your off-axis leakage for very little additional weight. My one X-ray patent talks about this trick:
US7639781B2


 
I'm kind of jumping the gun a bit by looking at some software issues. As I've mentioned before, I think a least-squares fitting routine can help reduce the impact of noise on the system -- it isn't possible to average multiple pulses together to cancel the noise, so any noise-mitigation scheme has to work on each pulse individually. The idea is to perform a second-degree least-squares curve fit of the data around the pulse peak. It is very easy to determine the maximum value of a second-degree polynomial, and a parabola appears to represent a Gaussian pulse quite accurately near its peak. Hopefully the impact of noise will be reduced by the same sqrt(N) factor observed for other kinds of signal averaging.
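To make the idea concrete, here's a minimal sketch in C of the kind of fit I have in mind -- assuming evenly spaced samples at x = 0..n-1 and solving the 3x3 normal equations directly with Cramer's rule. This is just an illustration, not the project's actual code:

/* Fit y = a*x^2 + b*x + c to n samples by least squares, then return
 * the parabola's vertex value c - b*b/(4*a), i.e. the estimated
 * pulse peak.  Sample positions are taken as x = 0..n-1. */
#include <stddef.h>

double fit_parabola_peak(const double *y, size_t n)
{
    double s0 = (double)n, s1 = 0, s2 = 0, s3 = 0, s4 = 0;
    double t0 = 0, t1 = 0, t2 = 0;

    for (size_t i = 0; i < n; i++) {
        double x = (double)i, x2 = x * x;
        s1 += x;  s2 += x2;  s3 += x2 * x;  s4 += x2 * x2;
        t0 += y[i];  t1 += x * y[i];  t2 += x2 * y[i];
    }

    /* Normal equations:  [s4 s3 s2][a]   [t2]
     *                    [s3 s2 s1][b] = [t1]
     *                    [s2 s1 s0][c]   [t0]   (Cramer's rule below) */
    double det = s4*(s2*s0 - s1*s1) - s3*(s3*s0 - s1*s2) + s2*(s3*s1 - s2*s2);
    double a = (t2*(s2*s0 - s1*s1) - s3*(t1*s0 - t0*s1) + s2*(t1*s1 - t0*s2)) / det;
    double b = (s4*(t1*s0 - t0*s1) - t2*(s3*s0 - s1*s2) + s2*(s3*t0 - s2*t1)) / det;
    double c = (s4*(s2*t0 - s1*t1) - s3*(s3*t0 - s2*t1) + t2*(s3*s1 - s2*s2)) / det;

    return c - b * b / (4.0 * a);   /* vertex value = fitted peak height */
}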

The problem I'm finding is that most curve-fitting programs aren't very efficient in terms of execution speed. They calculate terms multiple times rather than reusing already-computed data. Many were also written to be very generic, solving for arbitrary-length polynomials. Those shortcomings are fine if you just want to determine ONE polynomial that fits some data you have, but an XRF system can generate many pulses per second, so an inefficient least-squares solver could become a major bottleneck in getting decent data throughput.
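One way around this, if the fit window is always the same size with evenly spaced samples: center the x values at zero. The odd power sums then vanish, the normal equations decouple, and the per-pulse work shrinks to one accumulation loop and a few divides. A sketch (again just an illustration; the 21-point window is an arbitrary choice):

#include <stddef.h>

#define K 10                     /* half-window: 2*K + 1 = 21 samples */
#define N (2 * K + 1)

static double s2, s4;            /* window constants, computed once */

void fit_init(void)
{
    for (int i = -K; i <= K; i++) {
        double x2 = (double)i * i;
        s2 += x2;
        s4 += x2 * x2;
    }
}

/* y points at N samples centered on the raw maximum.  With x = -K..K
 * the sums s1 and s3 are zero, so a, b, c fall out directly. */
double fit_peak_fast(const double *y)
{
    double t0 = 0, t1 = 0, t2 = 0;
    for (int i = -K; i <= K; i++) {
        double v = y[i + K];
        t0 += v;
        t1 += i * v;
        t2 += (double)i * i * v;
    }
    double det = N * s4 - s2 * s2;
    double a = (N * t2 - s2 * t0) / det;
    double b = t1 / s2;
    double c = (s4 * t0 - s2 * t2) / det;
    return c - b * b / (4.0 * a);   /* fitted peak height */
}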

All this could change once I start getting real pulses to analyze so, once again, probably jumping the gun.
 
If the peak is approximately Gaussian, I'd use a parabolic estimator. In particular, the one in this parabolic estimator link. The link shows the derivation. The equations are compact. At the bottom of the page there is a link to the MATLAB implementation. Of course, one doesn't need MATLAB to do the math. The estimator finds the peak and its location based on three samples. I've used this in radar work. Hope this helps.
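For reference, the three-sample version is tiny. This is my sketch of the standard quadratic-interpolation formulas (the linked page may use different notation):

/* Three-point parabolic peak estimator.  y2 is the largest sample,
 * y1 and y3 its neighbors.  Returns the interpolated peak height;
 * *offset gets the peak position in sample periods (-0.5..+0.5)
 * relative to the center sample. */
double parabolic_peak3(double y1, double y2, double y3, double *offset)
{
    double d = 0.5 * (y1 - y3) / (y1 - 2.0 * y2 + y3);
    if (offset)
        *offset = d;
    return y2 - 0.25 * (y1 - y3) * d;
}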
 
I agree that a parabolic (second-degree polynomial) fit works fairly well around the peak of a Gaussian. But I want substantially more than 3 samples, on the theory that as the number of samples increases, the impact of noise (which is most assuredly present) decreases. My guess is the residual error due to noise is inversely proportional to the square root of the number of samples, which means that at some point I should run into diminishing returns relative to the processing overhead. The other aspect is that a second-order polynomial only works well right around the peak of a Gaussian pulse, so using a larger portion of the real pulse will introduce an error -- although error from this source should be consistent and could possibly be calibrated away. It would be interesting to generate a Gaussian pulse and see how well the polynomial fit works with it.

I have written some C code to test the idea. I add a random value to "pure" data, perform a least-squares fit to 20 points of the noisy data (centered around the peak), and then use the polynomial coefficients to solve for the peak value (this is trivial if it's a second-degree polynomial). The peak value obtained in this fashion is noticeably more accurate than just selecting the maximum value in the noisy data. But the "fitted peak" doesn't have zero error relative to the noiseless data.

I'm not providing actual numbers because the data I'm playing with may not correspond very well to the actual situation.
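For anyone who wants to try the experiment themselves, a bare-bones version of that kind of test might look like the following. The pulse width and noise amplitude are made-up numbers, and fit_parabola_peak() is the least-squares sketch posted earlier in the thread, not the original poster's actual test code:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double fit_parabola_peak(const double *y, size_t n);  /* earlier sketch */

int main(void)
{
    enum { NPTS = 20 };
    double y[NPTS];
    double true_peak = 1.0, sigma = 15.0, noise = 0.05;  /* assumed values */
    double max_sample = -1.0;

    srand(1);
    for (int i = 0; i < NPTS; i++) {
        double x = i - (NPTS - 1) / 2.0;          /* window centered on peak */
        y[i] = true_peak * exp(-x * x / (2.0 * sigma * sigma))
             + noise * (2.0 * rand() / RAND_MAX - 1.0);
        if (y[i] > max_sample) max_sample = y[i];
    }

    printf("true %.4f   max sample %.4f   fitted %.4f\n",
           true_peak, max_sample, fit_parabola_peak(y, NPTS));
    return 0;
}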
 
From reading your previous post (the one I had responded to) it wasn't obvious [to me] what the issue was. I got "curve fitting" and "computationally complex". From that I jumped to a simple parabolic estimator. Sorry to have oversimplified both the problem and the solution. Extracting single-pulse information is "sort of hard", especially at low signal-to-noise ratios (SNRs). Speaking of which, what are the single-pulse SNRs expected to be? Are you trying to estimate the energy? Power, or something else? What is the information you are trying to extract?

Just as an FYI, the parabolic estimator works very well with values in dB. That means the surrounding samples (the samples on either side of the peak) may not even be close in magnitude, yet it still finds the peak location and amplitude. I've used it on radar FFTs (in dB) to estimate target parameters.
 
One may be able to sidestep almost all the calculation, provided the shape of the pulse is regular.
In theory the energy is the integrated area under the pulse waveform.

In my scheme, I sum 20 samples in about 13 ms to get the value for the bucket sample, because the A/D conversion is just about fast enough to do that without getting crazy expensive. But there may not be a need to press home an exact integration. Basically, a higher-energy pulse simply has a bigger peak, and the photon has the good grace not to "get weaker" on the way to the detector -- the X-ray photon is already the smallest packet there can be. We may as well associate that peak with an element.
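As a sketch of that trade-off (with a hypothetical adc_read() standing in for whatever the real acquisition call ends up being), the inner loop can collect the area and the peak at the same time, so either can serve as the bucket value:

#include <stdint.h>

extern uint16_t adc_read(void);  /* assumed: one baseline-subtracted sample */

/* Sum 20 samples across the pulse (the discrete integral), while
 * also tracking the running maximum in case the peak alone suffices. */
uint32_t integrate_pulse(uint16_t *peak_out)
{
    uint32_t sum = 0;
    uint16_t peak = 0;

    for (int i = 0; i < 20; i++) {
        uint16_t s = adc_read();
        sum += s;
        if (s > peak)
            peak = s;
    }
    if (peak_out)
        *peak_out = peak;
    return sum;
}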

Before committing a value to a bucket, there is the phenomenon of pulse pile-up: a pulse being augmented by other pulses arriving before its decay is over. Their currents do sum. The strategy is to have the scheme recognize that fact. The only practical way is to gate the input once triggered, for the known period of a pulse, and reject any that are out of range. Maybe even forget the gating, and simply reject values that do not fall within an amplitude window corresponding to an element, the windows being found empirically.
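The window idea is simple enough to sketch. The table values below are placeholders -- the real windows would come out of calibration runs, as noted above:

#include <stddef.h>
#include <stdint.h>

struct window { uint16_t lo, hi; };   /* ADC counts, found empirically */

static const struct window element_win[] = {
    { 100, 120 },   /* hypothetical element A */
    { 150, 175 },   /* hypothetical element B */
};
#define NWIN (sizeof element_win / sizeof element_win[0])

static uint32_t bucket[NWIN];

/* A peak only counts if it lands inside one of the element windows;
 * pile-up sums fall between or above the windows and are discarded. */
void classify_peak(uint16_t peak)
{
    for (size_t i = 0; i < NWIN; i++) {
        if (peak >= element_win[i].lo && peak <= element_win[i].hi) {
            bucket[i]++;
            return;
        }
    }
    /* outside every window: likely pile-up, reject */
}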

I have looked through the energies from the possible elements, and I can't see any that would produce a combination that could not easily be rejected.

It may be that all one needs is an accurate, RF-fast peak detector operating in a triggered window, ignoring the full integration. The software need only discover that a stored peak lies in a valid element range to identify the element bucket, and increment the bucket count. The idea here is that past implementations have relied on extremely crude pulse counting on very poorly filtered sample remnants, and still managed to identify element buckets.

I have not had the opportunity to try some of these notions, but if some signal processing can bypass a whole ton of software, I like it! Also, with such a semi-analog approach, the (accurate) answer arrives at the speed of electrons, and can outrun any software that depends on sampling and then sorting those same signals.

Yes - I know. This is just my 2c, and many times I can get it wrong. I am envious that Mark has already got to the hardware. My home circumstances have temporarily derailed all this fun stuff.
 
I hadn't thought about doing basically a log conversion. Given what we're trying to do -- determine the relative energy of an X-ray photon as accurately as possible, on the cheap -- the log of the signal probably won't work.

The idea is that elements fluoresce when irradiated with X-rays, and the energy of the emitted photons is characteristic of each element. We use a proportional detector -- a semiconductor diode or a scintillator -- to output a pulse whose height is proportional to the X-ray energy. The pulse height is converted to an index into an array, and that array element is incremented by one. Over time the array builds up a set of counts that, when plotted, will show the type and concentration of the elements present in a sample. The title of this very lengthy thread is "needing more than a spark test", so the intent is to come up with a DIY way to determine the composition of alloys a scrapyard scrounger might encounter.
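In code terms that bookkeeping is about as simple as it sounds. A minimal sketch, with the bin count and full-scale pulse height as assumed placeholders:

#include <stdint.h>

#define NBINS     1024           /* assumed spectrum resolution */
#define FULLSCALE 4096.0         /* assumed 12-bit pulse-height range */

static uint32_t spectrum[NBINS];

/* Scale a pulse height into a bin index and bump that bin; over many
 * pulses the spectrum[] array becomes the plot described above. */
void record_pulse(double peak_height)
{
    int bin = (int)(peak_height / FULLSCALE * NBINS);
    if (bin < 0)
        bin = 0;
    if (bin >= NBINS)
        bin = NBINS - 1;         /* clamp out-of-range pulses */
    spectrum[bin]++;
}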

The X-ray source used to excite the fluorescence comes from radioactive smoke-detector sources. They emit 60 keV gamma rays, but their fluence (a.k.a. brightness) is pretty low. A 0.25"-thick aluminum plate will attenuate 60 keV gammas by 99%, and that's what I'm using for an enclosure. The X-rays emitted by elements of interest like iron, chromium and nickel are around 6 keV, which are even more strongly absorbed by the aluminum.

We've occasionally digressed into some rather deep discussions on things like circuit design, PCB assembly ideas and the like, but so far we haven't been taken to task by the moderators. Anyway, they're probably amused by this small group of weirdos :).
 
The hardware isn't quite there yet but getting close :). Hopefully Zeno isn't waiting at the other end of the bridge!

My concern regarding the need for more sophisticated signal processing is totally based on what I've seen coming out of the rather poorly designed Pocket Geiger. Lotsa noise, which will surely degrade the energy resolution we get. If that can be improved, perhaps by going to a zero-bias-voltage approach, a relatively simple peak detector might just do the trick.

I have to say that the Pocket Geiger wasn't designed for this kind of application. It really IS meant to be a relatively inexpensive geiger counter, so why should we be surprised that it's not ideal for what WE want? If need be, we can (like Graham) just harvest the detector -- it actually is cheaper to buy the entire Pocket Geiger than the detector from Digikey or Mouser!

Hopefully, once some data starts coming out of a lashup or two, we can zero in on an approach that is feasible for the ambitious H-M member.
 
If I understand correctly, the source is a radioactive element emitting low-energy X-rays? And these sources emit photons at random times, if I remember correctly. How does one count two photons that arrive so close together that the smeared-out scintillator response blurs them into one? Is it possible to work in the spectral (Fourier) domain rather than the time domain? That way you wouldn't need crazy-fast circuitry, correct?

Is the fluorescence in the visible spectrum? The lines emitted are unique to the chemical makeup? How do you determine the power in each frequency? Are you making a sort of spectral-analysis tool, like a diffraction grating coupled with a frequency-insensitive detector? Or a Fourier spectrometer -- much more sensitive, but a lot harder to make!
 