Needing more than a spark test?

We definitely need to stay linear for the first collection of counts. Counts with energies beyond the maximum are dropped.

Then a mapping needs to be applied to give the various energies the correction needed to compensate for the detection-probability characteristic of the PIN photodiode. Photons with energies of roughly 7 keV to 10 keV have a near-100% probability of producing a count when they arrive. A direct hit from a high-energy 60 keV Am-241 photon has only about a 3% chance of being counted, but we only use those during calibration, to set where the 60 keV bucket sits on the X-axis. Counts at energies outside the "near certain" range need to be compensated by the inverse of the probability curve.
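A rough sketch of what that compensation could look like in code; the efficiency curve below is a placeholder anchored only to the two numbers above (near 100% around 7-10 keV, about 3% at the 60 keV Am-241 line), so a real version needs the diode's measured detection-probability data:

```cpp
// Minimal sketch of the inverse-efficiency compensation described above.
// The efficiency numbers are placeholders, NOT measured PIN-diode data.
#include <cstdint>

const int   NUM_BINS    = 600;    // e.g. 0.1 keV per bin, 0..60 keV
const float KEV_PER_BIN = 0.1f;

uint32_t rawCounts[NUM_BINS];       // linear counts, collected first
float    correctedCounts[NUM_BINS]; // counts after efficiency correction

// Placeholder detection probability vs energy (0..1).
float detectionEfficiency(float keV) {
    if (keV <= 10.0f) return 1.0f;                      // "near certain" region
    // crude linear fall-off from 100% at 10 keV to ~3% at 60 keV (placeholder!)
    float eff = 1.0f - (keV - 10.0f) * (0.97f / 50.0f);
    return (eff > 0.03f) ? eff : 0.03f;
}

void applyEfficiencyCorrection() {
    for (int i = 0; i < NUM_BINS; i++) {
        float keV = (i + 0.5f) * KEV_PER_BIN;           // bin centre
        float eff = detectionEfficiency(keV);
        correctedCounts[i] = rawCounts[i] / eff;        // inverse of the curve
    }
}
```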

The vertical scale is about prevalence, not energy. What if we had several materials in there, and a bunch of high counts because we left it counting for longer while we had a coffee? The total is from all materials, and the fractions are what we discover. The Y-axis step increment puts 100% at the top.

If we want to expand the view to "zoom in" on a range of energies, we simply don't collect counts for energies outside that range. The limited range gives us a "new" 100%: all the peaks "get bigger", and the X-axis gets more "spread out".
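As a sketch of that idea (names and sizes are only illustrative): restrict to an energy window and let the window total become the new 100%.

```cpp
// Illustrative sketch: restrict the histogram to an energy window and express
// each bin as a percentage of the counts inside that window. Peaks inside the
// window "get bigger" because the window total becomes the new 100%.
#include <cstdint>
#include <vector>

std::vector<float> zoomPercent(const std::vector<uint32_t>& counts,
                               float kevPerBin, float loKeV, float hiKeV) {
    int lo = static_cast<int>(loKeV / kevPerBin);
    int hi = static_cast<int>(hiKeV / kevPerBin);
    if (lo < 0) lo = 0;
    if (hi > static_cast<int>(counts.size())) hi = static_cast<int>(counts.size());
    if (hi <= lo) return {};

    uint64_t windowTotal = 0;
    for (int i = lo; i < hi; i++) windowTotal += counts[i];

    std::vector<float> percent(hi - lo, 0.0f);
    if (windowTotal == 0) return percent;
    for (int i = lo; i < hi; i++) {
        percent[i - lo] = 100.0f * counts[i] / windowTotal;  // new 100% = window total
    }
    return percent;
}
```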

I have to remember it is a histogram. Going logarithmic on energies only affects the X-axis, and that makes sense. Instead of excluding higher energies from the plot, we can use a logarithmic X-axis. This would be pretty good, especially as an option alongside zoomed-in ranges, rather than as a default.

Using logarithmic, or any other compression, on a histogram Y-axis is something I can't quite get my head around. That is not to say it is never done, just that I can't see that what it gives is ever meaningful.
Yes, I sort of forgot that the Y-axis is the number of events. Nonetheless, it would be interesting to display it that way. Once I get something functional, I will play around with that, simply to cross it off my list. Often interesting features are discovered by looking at data in different ways, and log-log plots are not uncommon. We don't have to lose any information; these microcontrollers have sufficient memory to store both arrays. Besides the ADC, the processor is comparatively loafing along, so a log function or two won't hurt it. Not to say that the processor isn't working, but I hardly think it is fully loaded, or anywhere near it, at least based on my Teensy-based ELS experience.
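If it ever gets tried, keeping both arrays could look roughly like this sketch; the bin counts and the 1 to 60 keV span are just illustrative choices, not anything from the actual code:

```cpp
// Sketch of keeping two arrays in parallel: the usual linear-bin histogram
// plus a second one with logarithmically spaced energy bins for a log-X display.
#include <cstdint>
#include <cmath>

const int   LIN_BINS = 600;
const float KEV_MAX  = 60.0f;
const int   LOG_BINS = 120;
const float KEV_LO   = 1.0f;    // a log scale needs a nonzero lower edge

uint32_t linHist[LIN_BINS] = {0};
uint32_t logHist[LOG_BINS] = {0};

void recordEnergy(float keV) {
    // linear binning
    int li = static_cast<int>(keV / (KEV_MAX / LIN_BINS));
    if (li >= 0 && li < LIN_BINS) linHist[li]++;

    // logarithmic binning: equal steps in log(keV) between KEV_LO and KEV_MAX
    if (keV >= KEV_LO && keV < KEV_MAX) {
        float frac = std::log(keV / KEV_LO) / std::log(KEV_MAX / KEV_LO);  // 0..1
        int gi = static_cast<int>(frac * LOG_BINS);
        if (gi >= LOG_BINS) gi = LOG_BINS - 1;
        logHist[gi]++;
    }
}
```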
 
If you mess with the values in a histogram by any sort of non-linear scaling, all you get is lost information. It's worse than lies! The plot ends up saying that materials whose percentages were low, or rare, are suddenly there in abundance. I say don't do it, or at least take care that you know what you are seeing.

The effect you are looking for, exploring peaks from materials present in low concentrations (which is not the same as "low energy"), happens automatically when you narrow the range.

I suppose you can get an expanded, more detailed look at the materials that are only there in small percentages by truncating the Y-axis count, such that (say) the 10% point is at the top of the Y-axis, and everything that would have made those peaks look smaller is now off the top of the chart.
A logarithmic Y-axis would compress those peaks, but that non-linearity is applied over the whole range, and the perception of how much of these minority materials is present gets misleadingly skewed. That is only a visual impression, though: if the logarithmic Y-axis divisions are correctly labelled, you can still estimate percentages. There are dangers in taking the logarithm of percentages; it becomes a display of ratio, like dB, which needs a reference to be counted relative to.

I am going to collect the count-vs-energy data set, let PyMCA accept the file, and see what it does with it. The only priority is to make the measured energy, which decides which bucket gets incremented, as accurate as possible. I am, so far, somewhat skeptical.
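For the export itself, something as simple as a two-column energy/counts text file may be enough; this is only a sketch, and PyMCA's import options should be checked for the exact format it wants:

```cpp
// Minimal sketch of dumping the spectrum as plain two-column ASCII
// (energy in keV, counts). The format choice here is an assumption.
#include <cstdint>
#include <cstdio>

bool writeSpectrum(const char* path, const uint32_t* counts, int numBins,
                   float kevPerBin) {
    FILE* f = fopen(path, "w");
    if (!f) return false;
    for (int i = 0; i < numBins; i++) {
        float keV = (i + 0.5f) * kevPerBin;            // bin centre
        fprintf(f, "%.3f %lu\n", keV, (unsigned long)counts[i]);
    }
    fclose(f);
    return true;
}
```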

[Edit: Newsflash! - My packet of lead sheet has arrived!
Lead sheet! Now I have to figure out how much offcut needs to be melted.
FreeCAD can do that from a volume - I think. I have never done that before. ]
 
Use the macro FCinfo. Select the body or part and it calculates the volume. You can set the material to Pb and the mass of the part is calculated. It's like magic. That's how I knew my "plastic source ring" would be 210 g if cast in lead. Of course you will need to melt more than that: slag forms, and not everything in the pot quite makes it into the mold. Basically you don't want to run your melting pot too low, especially if you don't have good temperature control; the pot temperature tends to run high. And please cast outdoors...
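The arithmetic FCinfo is doing for you is just mass = volume x density; as a back-of-envelope check (the 18.5 cm^3 figure is only an example volume that happens to land near 210 g with lead at about 11.34 g/cm^3):

```cpp
// Back-of-envelope check of the volume-to-mass step FCinfo performs:
// mass = volume x density, with lead at roughly 11.34 g/cm^3.
#include <cstdio>

int main() {
    const double leadDensity_g_per_cm3 = 11.34;
    double volume_cm3 = 18.5;                       // example volume from CAD
    double mass_g = volume_cm3 * leadDensity_g_per_cm3;
    printf("%.1f cm^3 of lead ~ %.0f g\n", volume_cm3, mass_g);
    return 0;
}
```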
 
Thanks for the FreeCAD tip! :)

There is also what we can learn from the guys who cast bullets. You throw some candle wax in and stir it around; some prefer sawdust from non-pressure-treated wood, and paper towels also work. This "fluxing" puts carbon in there, which collects up all the impurities and junk so they float on the top and can be skimmed off. I also hear of borax being used. There are some things to be learned about technique here. The whole scene is messy and stinky, with flames, soot, and toxic fumes everywhere!

"First melt" lead from buildings might contain stuff like drywall gypsum and other contaminants. Zinc, steel and other unwanteds does float into the slag. Second melt should produce shiny lead, with colourful oxide film effects. I have to allow excess, to arrange some to get a grip on it, for a start!

 
Hi Mark
The apparently constant energy outcome, regardless of the material?
I know this sounds a bit crazy, but that would happen if somehow a pulse scaling normalization code was inadvertently left someplace it shouldn't be.

Unlikely, I know. Long shot! I don't know the detail of your code, and you are better at programming stuff than I am anyway. It was just a passing thought.
No one is immune from making programming mistakes. I've proven it myself many times :).

I'll upload the most-recent version of my code for folks to look at. It can be found here. Ya never know....

My pulse-qualification code starts at line 591. The pulse area variable is initialized on line 606. As I've mentioned before, I have three pulse qualification "filters". One eliminates pulses that are too short, another eliminates pulses that are too long and the third rejects pulses if the second-order polynomial fit around the peak isn't good enough.
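For anyone following along, the three filters amount to something like the sketch below. This is not the actual code from the repo; the thresholds and names are invented for illustration. The peak test fits a second-order polynomial to five samples around the peak and rejects the pulse if the residual is too large.

```cpp
// Sketch of three pulse-qualification filters: too short, too long, and a
// poor quadratic fit around the peak. Thresholds are illustrative only.
#include <cmath>

const int   MIN_SAMPLES = 8;      // "too short" threshold (illustrative)
const int   MAX_SAMPLES = 64;     // "too long" threshold (illustrative)
const float MAX_FIT_RMS = 0.02f;  // allowed RMS residual of the peak fit (V)

// Fit y = a + b*x + c*x^2 to the five samples centred on the peak
// (x = -2..+2) and return the RMS residual of the fit.
float peakFitResidual(const float* pulse, int peakIdx) {
    float sy = 0, sxy = 0, sxxy = 0;
    for (int x = -2; x <= 2; x++) {
        float y = pulse[peakIdx + x];
        sy += y;  sxy += x * y;  sxxy += x * x * y;
    }
    // Normal equations simplify for symmetric x: S0 = 5, S2 = 10, S4 = 34
    float b = sxy / 10.0f;
    float c = (sxxy - 2.0f * sy) / 14.0f;
    float a = (sy - 10.0f * c) / 5.0f;

    float ss = 0;
    for (int x = -2; x <= 2; x++) {
        float r = pulse[peakIdx + x] - (a + b * x + c * x * x);
        ss += r * r;
    }
    return std::sqrt(ss / 5.0f);
}

bool qualifyPulse(const float* pulse, int length, int peakIdx) {
    if (length < MIN_SAMPLES) return false;                  // filter 1: too short
    if (length > MAX_SAMPLES) return false;                  // filter 2: too long
    if (peakIdx < 2 || peakIdx > length - 3) return false;   // need 5 points around peak
    return peakFitResidual(pulse, peakIdx) <= MAX_FIT_RMS;   // filter 3: bad fit
}
```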

It's worth pointing out that I didn't change ANY of the pulse-qualification code, I just changed what I enter into the MCA array. I did have to change the MCA's X range because the integrated pulse values are much larger than the peak values -- but that's easy to do, it's a parameter passed to the MCA initialization routine. So I guess one thing to review is how the binning is written. The MCA code is a library function I wrote and can be found here.
 
I may have discovered a problem in my MCA library. I haven't had an opportunity to see how it affects the end result, but....

In the MCA_begin code I calculate the bin size for the MCA's X-axis. It is used to convert a floating-point input (like voltage peak or pulse area) into an integer that indexes into the MCA array. It takes the length of the array, an unsigned integer, and divides that into the specified data range, a floating-point number. I don't have a typecast from integer to float for the integer... with no resultant complaint from the compiler! I don't know if it's just quietly doing the typecast or not (the latter would definitely cause problems!). This would be atypical behavior from a C compiler, but it's designated as "Cpp" code to suit the Arduino IDE. Maybe that's the difference?? I'd think that a more modern language would implement even stronger type-testing, but mebbe not?
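For reference, a minimal sketch of that arithmetic (not the actual MCA_begin code): in both C and C++ the unsigned integer operand is quietly converted to floating point by the usual arithmetic conversions, so the division is done in floating point either way, but an explicit cast makes the intent obvious. The real trap is when both operands are integers, which silently truncates.

```cpp
// Minimal sketch of the bin-size / bin-index arithmetic, not the real library.
// The unsigned integer is promoted to float here, cast or no cast; the
// dangerous case is integer-by-integer division, which truncates.
#include <cstdint>

const unsigned int NUM_BINS = 1024;

float binSize(float xMin, float xMax) {
    return (xMax - xMin) / (float)NUM_BINS;    // explicit cast, same result
}

int binIndex(float value, float xMin, float xMax) {
    int i = (int)((value - xMin) / binSize(xMin, xMax));
    if (i < 0) i = 0;
    if (i >= (int)NUM_BINS) i = NUM_BINS - 1;  // clamp out-of-range values
    return i;
}
```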
 
More reading has revealed that C++ does quietly perform these "implicit" type conversions, promoting the integer operand to floating point, just as C does. Nonetheless I'll edit my MCA code to make the cast explicit and see what happens.
 
Ooh - yes, I will have a look, but I won't be directly commenting on the coding. I remember the type casting in C, and how carefully one had to manage it, from when I did a C course (eons ago :) ). Trying to do arithmetic with an integer getting involved with numbers that produce fractional remainders, even with other integers - oh boy! Better still, numbers that had a perfectly good floating-point value getting turned into truncated integers. It can happen easily. I will be having a go using Java, with some help, but that too has huge potential for mess-ups.

For now, let's assume you get to all the places where data type conversions in the code might have messed with you, because I know you will, and you are good at this stuff!

----------------------
The way I imagine this works...
So, just thinking about an artificial scenario: we have a pulse, and it has an integrated count-up representing its area, and that is the analogue of the energy. Now imagine we have the same sort of pulse, but with a higher amplitude. Quite reasonably, we can say the new pulse has a higher energy, even if the duration remains constant. (It might not be so.) Suppose we force that: we take a fixed number of samples during a pulse we believe is valid, forcing the duration of the pulse. We choose a time long enough for a good pulse to have subsided, and reject the others.
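As a sketch of that forced-duration idea (the window length and names are made up):

```cpp
// Sketch of the fixed-duration idea above: once a pulse is judged valid,
// integrate a fixed number of baseline-subtracted samples so the area scales
// with pulse amplitude. Window length is illustrative only.
const int PULSE_WINDOW = 32;   // samples per accepted pulse, chosen long enough
                               // for a good pulse to have subsided

// Returns the pulse "area" (the energy analogue) over a fixed window.
float integratePulse(const float* samples, int startIdx, float baseline) {
    float area = 0.0f;
    for (int i = 0; i < PULSE_WINDOW; i++) {
        float v = samples[startIdx + i] - baseline;   // remove the DC offset
        if (v > 0.0f) area += v;                      // accumulate above baseline
    }
    return area;
}
```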

Now, getting to the size of the array - the number of buckets. How many distinct, separate areas can we resolve, from little pulses to big pulses? This is about the resolution of the X-axis. We need it to be such that from one bucket to the next, the change in eV is reasonable. If we make too many, the counts we have get spread throughout them, making the actual count in most buckets rather low.

If there are too few, then a whole range of energy values can all increment the same bucket. The buckets show higher counts, but there are fewer of them.

So how big should the array be? If we think we can tell the difference between (say) 6 keV and 7 keV, we can use 60 buckets; we are not going to see any energy beyond 60 keV, so that is the extreme right side of the X-axis. If we can tell the difference between 6 keV and 6.5 keV, we can use 120 buckets. If we are good enough to tell the difference between 6 keV and 6.1 keV, we can have 600 buckets.
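In code, that bucket arithmetic is just the full scale divided by the smallest step we think we can resolve:

```cpp
// The bucket-count arithmetic above in one line: number of bins equals the
// full scale divided by the smallest energy step we believe we can resolve.
// The +0.5f rounds, so 60/0.1 reliably gives 600 despite float representation.
int binsForResolution(float fullScaleKeV, float resolutionKeV) {
    return (int)(fullScaleKeV / resolutionKeV + 0.5f);   // 60/1 -> 60, 60/0.5 -> 120, 60/0.1 -> 600
}
```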

Whatever number we choose for the array length, it is, of course, an integer, and stays fixed; the values in it are floating point. Some kinds of array can have dynamic length - that kind we definitely do not need. We do need the implicit conversion, so that the answer to any floating-point calculation comes out as floating point even if one of the inputs was an integer.
 
Yup - you got me hooked. I am reading your MCA library function. :|
 
A key part of my pulse qualification code is how it extracts a whole pulse from the ring buffer. I originally had some code that tried to grab the data in step with the ADC but I just couldn't get it to work right. Something about the sequencing. So instead of doing that, I store the start-of-pulse index into the ring buffer, then use a delayMicroseconds(200) call -- long enough to ensure that the entire pulse has been acquired, but short enough that the pulse data won't be overwritten by my ADC ISR. Then I suck the entire pulse out of the ring buffer, starting at the start-of-pulse index. I use modular arithmetic to take care of the case where the pulse wraps around to the start of the ring buffer. Now I can process the pulse data without worrying about the pulse being overwritten.

There are a few refinements, though. My pulse processor is triggered when the rolling average exceeds the trigger point, so to integrate the entire pulse I scan backwards through the ring buffer, starting at its current index, until the pulse data is equal to or below my data-clip point; that locates the start of the pulse. The pulse is defined as "ending" when its value comes back down to equal to or less than DataClip.
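For readers, the two ideas look roughly like this sketch; it is not the actual acquisition code, and the buffer size is illustrative:

```cpp
// Sketch of (1) scanning backwards from the trigger index until the data drops
// to or below DataClip to find the true start of the pulse, and (2) copying
// the pulse out of the ring buffer with modular arithmetic for wrap-around.
#include <cstdint>

const int RING_SIZE = 1024;              // must match the real buffer (illustrative)
volatile float ringBuf[RING_SIZE];       // filled by the ADC ISR in the real sketch

// Step backwards (modulo the buffer size) until the sample is <= dataClip.
int findPulseStart(int triggerIdx, float dataClip) {
    int idx = triggerIdx;
    for (int n = 0; n < RING_SIZE && ringBuf[idx] > dataClip; n++) {
        idx = (idx - 1 + RING_SIZE) % RING_SIZE;   // wrap to the end if needed
    }
    return (idx + 1) % RING_SIZE;        // first sample above the clip level
}

// Copy `count` samples starting at `startIdx`, wrapping as necessary.
void extractPulse(int startIdx, int count, float* dest) {
    for (int i = 0; i < count; i++) {
        dest[i] = ringBuf[(startIdx + i) % RING_SIZE];
    }
}
```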


The data-clip point is established during the setup() procedure and is equal to the baseline voltage (a heavily low-pass-filtered version of the input signal) PLUS three times the RMS noise around the baseline. My typical DataClip value is around 40 mV with my current analog circuitry.
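That clip level is essentially baseline plus three sigma; a sketch, with illustrative names:

```cpp
// Sketch of the clip level as described: the low-pass-filtered baseline plus
// three times the RMS noise measured around it during setup().
#include <cmath>

float computeDataClip(const float* quietSamples, int n, float baseline) {
    if (n <= 0) return baseline;
    float sumSq = 0.0f;
    for (int i = 0; i < n; i++) {
        float d = quietSamples[i] - baseline;   // deviation from the baseline
        sumSq += d * d;
    }
    float rmsNoise = std::sqrt(sumSq / n);
    return baseline + 3.0f * rmsNoise;          // around 40 mV in the setup described
}
```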

One possible source of variation that could reduce the effective resolution of a pulse-area calculation is the magnitude of DataClip, since the pulse won't have truly terminated at that point. But the more I extend the effective pulse length (by lowering DataClip), the more likely it is that another pulse will occur and mess up the calculation. Some experimentation is needed here.
 