Needing more than a spark test?

Also, regarding the trigger level, I implemented a simple rolling-average scheme to help reduce false triggers. It's basically a crude IIR filter. I could have done it differently, but I think an IIR approach is better because it will recover more quickly after a pulse comes along. I also expect it to respond more quickly to a pulse, so (hopefully) I won't need to compensate much for phase shift when I integrate pulses. BTW, I haven't written the pulse-integration code yet, but it should be pretty simple. I hope.
 
The heavily-filtered approach is predicated on a fairly slow count rate. But due to the random nature of radioactive decay there is still a finite chance of pulse overlap, so some kind of pulse-qualification scheme is needed. My code uses three criteria that all have to be satisfied:
1. The pulse amplitude must exceed a user-set trigger level. This is like the simplest kind of oscilloscope trigger scheme.
2. The pulse width has to be close to the width expected for a single pulse. I think this will eliminate most kinds of pileup.
3. The FastPeak function returns the RMS error between the polynomial fit and the pulse data, centered around the peak. Another user-set value (called F_level in the code) sets what differentiates a "good" pulse from a malformed one. This should help reject shorter noise pulses that might originate in the detector electronics.

Regarding being able to examine the code, I probably didn't get the permission stuff done correctly. It was supposed to work for anyone with the link. I also don't know if the link changes depending on the permission, so after changing it, here you go.
Have the code now. Thanks! Now I need to make up an Arduino project and give it a whirl! And thanks for the text embedded in the code - lets me get an idea what you were thinking when you wrote it!
 
I have placed my benchmark code for testing fast 8-bit parallel I/O on a Teensy 4.0 on my Google Drive. I don't have code for a T4.1, but I had summarized which 16 bits are contiguous (my memory was incorrect -- it's possible to do ONE 32-bit read and just shift the bits to get a 16-bit value). That document is on Google Drive, too. The link is here. The code is a little messy because I was playing around with different schemes, so please be kind :).

By using the port's Toggle register it's possible to output data pretty fast -- my benchmark indicates about 300MBytes/second. That's for a tight loop with nothing but calls to digitalWrite8Bits_Toggle() inside it. And it updates all 8 bits at the same time. A T4.1 could do 16 bits at the same rate.
 
The comments were self-serving so I could revisit the code with a faint chance of figuring out what I'd done!
 
When reviewing my Arduino XRF code, I realized that the data format of the pulse info I send to the server won't display correctly on the Arduino's serial monitor. It has to do with my decision to send raw floating-point values rather than ASCII text. That's easy to fix for the upcoming debugging phase, where results will be sent to the serial monitor app. That decision might come back to bite me if it turns out that the floating-point format used by the T4.x is different from what the server's PC uses. I guess I need to check that....
 
I have uploaded a newer version of my Teensy XRF front-end code. I have done a fair amount of debugging and found/fixed some problems. The most notable was an issue with the IDE using the wrong header file for ADC.h. It's ugly but I fixed that by specifying the full path to the one I wanted. The link I provided before should work. If not, let me know and I will post the updated one.
 
On a slightly different subject, I am working on a longer chuck wrench to make it easier to dial things in on my 4-jaw chuck. The ones I have now are too short so I'm always bumping the DI when I'm doing that. The new one will be about 5 inches long. One of the first steps in making it was to turn down all but about .7" to a smaller diameter. I ended up with about .002" of taper over 6 inches. Not too bad for a mini lathe. That's after I shimmed the headstock, based on my RDM testing. I suspect that a fair amount of that taper is due to slight misalignment of my tailstock -- I used a live center to support the end of the shaft.

I had the headstock off because, as some might recall, I replaced the headstock bearings & thought I might as well do what I could to improve its alignment.
 
That was one of my earlier lathe tools that I made. For some reason (cost, primarily) the stock keys were knuckle-busters, way too short. I made two chuck keys, one for my 3-jaw and one for my 4-jaw. I don't know why the manufacturer chose different sizes; just another inconvenience for a mini-lathe user to deal with.
 
I ran across this interesting discussion of XRF pitfalls:

 
There are dozens, maybe hundreds, of ways a test can wrongly indicate which metals are present. It starts with the calibration, the purity of the reference samples, and the resolution of the XRF analyzer. First there is the analyzer's measurement of the return energies; then there are the fumbles the analyzer's software can make in interpreting them.

Then, when one looks at the possible spectra from elements, there are many L-shell lines that can spoof the presence of metals that just aren't there! Any analyzer can show you a spectrum. Only the good ones can deduce, from the presence of certain combinations of peaks, whether a particular element is actually there, and assign a probability.
 