The raw data from the 10/26/11 Iodine-133 and Xenon-133 detections are available for download via the links below.
The following charts show the all-up data, which was obtained over a one-week period. NOTE THAT LONG HALF LIFE RADIATION IS PRESENT THROUGHOUT THE ENTIRE SAMPLE PERIOD.
The identification of Rn-222 was based on subtracting the fitted I-133 data from the initial detection period. That methodology increases the inherent noise level in the data set; because such noise propagates down the detection time period, the method was only used on the initial sampling period. The result left a source with a composite 29-minute half-life. The expected composite half-life of Rn-222 daughters is 36.5 minutes.
For an indication of the quantity of variability expected in the data by the POTRBLOG team, note the number of significant digits in the exponential curve fit data. A future video on this subject may follow.
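As a concrete illustration of the subtraction step described above, here is a minimal sketch in Python using synthetic counts and assumed amplitudes (not the actual 10/26/11 spreadsheet values): a fitted long-lived I-133-like component is subtracted from the early data and the residual is fitted with a single exponential.

# Hypothetical sketch of the subtraction method: synthetic data and assumed
# amplitudes/half-lives, NOT the actual 10/26/11 values.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.arange(0, 120, 3.0)                       # minutes; early sampling window only

def expo(t, a, half_life):
    return a * np.exp(-np.log(2) * t / half_life)

# Synthetic "truth": a short-lived composite source plus a long-lived I-133-like term
cps = expo(t, 3.0, 36.5) + expo(t, 1.7, 19.25 * 60) + rng.normal(0, 0.15, t.size)

# Step 1: subtract the long-lived component using parameters fitted elsewhere
# (here simply assumed, standing in for the fitted I-133 term).
residual = cps - expo(t, 1.7, 19.25 * 60)

# Step 2: fit the residual in the early window with a single exponential.
popt, pcov = curve_fit(expo, t, residual, p0=[2.0, 30.0])
print(f"residual amplitude ~ {popt[0]:.2f} cps, composite half-life ~ {popt[1]:.1f} min")

Because the subtracted term carries its own noise, the residual is noisier than the raw counts, which is why the subtraction is restricted to the initial sampling period.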
The first download is the raw sample data in CSV format, saved as a TXT file.
After navigating to the filedropper site, select "DOWNLOAD THIS FILE"
The second download is the raw background data in CSV format, saved as a TXT file.
After navigating to the filedropper site, select "DOWNLOAD THIS FILE"
Ms. X,
My two-isotope model is complete for this data set. What type of spreadsheet format do you prefer, and what is the best way to get it to you? Thanks.
Ms. X,
I have a few constructive criticisms of the 10/26/11 data analysis, which also apply to the earlier data.

First, there is just too much noise in the cps data to use more than two isotopes, period. Look at R^2 (correlation coefficient) for Xe133: it's only 0.041! R^2 needs to be as close to 1 as possible, preferably to at least two 9's, for the model to be good and the noise to be low enough.

Second, averaging is your friend for reducing noise. You should be more aggressive in using a 15-second averaging time, even for the shortest half-life cps. For example, 15 sec = 0.25 min << 58 min for isotope 1. As long as the averaging time is << the half-life of the shortest-lived isotope, you will not smear the data.

Third, the baseline cps should be averaged to a constant (no slope) and subtracted from the unfitted cps. A baseline cps that is subtracted from the unfitted cps point-by-point will increase the noise by a factor of ~ 1.4.

Fourth, while fitting the Excel exponential trendline is easy, it doesn't reflect the reality that the total cps is probably the sum of one or more isotopes plus a background cps. One of the problems in using two different trendlines for different parts of the curve is that you have to assume that whichever isotope you fit for is not being influenced by the presence of the other. This depends in detail on the count rates, and on the relative decay times of the two isotopes being sufficiently different from one another, and it requires that a significant amount of raw cps data be thrown out at the "break point" between the two isotopes. This increases the uncertainty of the results.

You have to use the "solver" add-in for Excel. You simply must. If you have the original CD you can easily install it. The spreadsheet I will send you has the raw cps model defined as follows:
N(t) = Nb + N1*EXP(-ln(2)*t/tau1) + N2*EXP(-ln(2)*t/tau2),
where the background count rate is Nb, the decay rates for isotopes 1 and 2 are N1 and N2, respectively, and the half-lives for isotopes 1 and 2 are tau1 and tau2, respectively, and t is the time in hours. I assume that N1 and N2 are unrelated, that is, N2 is independent of N1. This may not be true if N2 is a decay product of N1. If N2 is a decay product of N1 then a source term for N2(N1) would be needed.
I found the average background cps for your background data to be ~ 0.435 cps. I subtracted it from the raw cps data, then applied a 5-point running average to the 3-second cps data to make it comparable to the 15-sec cps data. A column was created with the above formula in it, and another was created to contain the square of the difference between the raw cps and the model, point-by-point. This column was summed (sum-of-squares) and the "solver" was asked to minimize it by varying the parameters Nb, N1, tau1, N2 and tau2. Here are the results:
Nb = 0.063 cps, N1 = 3.04 cps, tau1 = 0.433 hrs (26 min), N2 = 1.34 cps, tau2 = 10.21 hrs, R^2 = 0.996. Nb is small, as expected, since the background cps has already been subtracted out. I hope this helps you identify the two isotopes. Note that these results do not agree with your 5.1-cps decay rate, 58-min half-life for the red (Rn222 + I133?) raw cps data, or your 1.737-cps decay rate, 19.25-hr half-life for the green (Iodine 133) raw cps data.

I'm doing this analysis on the latest version of LibreOffice "Calc", which is an Excel clone; however, when I save the file in Excel format the plots get dropped from the spreadsheet. I will try to attach those separately as graphics. I do appreciate what you are trying to do, and I believe that you have discovered something significant. Let me know if you have any questions. Thanks. Stay tuned for the attachments.
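For readers without access to Excel's solver, a rough equivalent of the procedure described above can be sketched in Python with scipy. The model is the one stated in the comment; the synthetic data and starting values below are illustrative assumptions, not the actual 10/26/11 numbers.

# Sketch of the least-squares procedure described above: background-subtracted cps
# modeled as two independent isotopes plus a residual background. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def model(t, Nb, N1, tau1, N2, tau2):
    # N(t) = Nb + N1*exp(-ln(2)*t/tau1) + N2*exp(-ln(2)*t/tau2), t in hours
    return (Nb
            + N1 * np.exp(-np.log(2) * t / tau1)
            + N2 * np.exp(-np.log(2) * t / tau2))

rng = np.random.default_rng(1)
t = np.arange(0, 48, 0.25)                                 # hours
cps = model(t, 0.05, 3.0, 0.43, 1.3, 10.0) + rng.normal(0, 0.1, t.size)

p0 = [0.1, 3.0, 0.5, 1.0, 10.0]                            # starting guesses for Nb, N1, tau1, N2, tau2
popt, pcov = curve_fit(model, t, cps, p0=p0)

ss_res = np.sum((cps - model(t, *popt)) ** 2)              # the "sum-of-squares" column
ss_tot = np.sum((cps - cps.mean()) ** 2)
print("fit:", dict(zip(["Nb", "N1", "tau1", "N2", "tau2"], np.round(popt, 3))))
print("R^2 =", round(1 - ss_res / ss_tot, 4))

As with the spreadsheet version, the quality of the answer depends on reasonable starting values and on the two half-lives being well separated.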
Ms. X
Oops, my bad. It looks like I assumed the averaging time was seconds but it really was minutes. Using a 15-minute averaging time will be too long for a 58-minute half-life. I will rework the numbers using the 3- and 15-minute averaging times.
Ms. X
Here are the results using your 3- and 15-minute averaging times:
Nb = 0.063 cps (unchanged), N1 = 3.57 cps (up from 3.04 cps), tau1 = 0.427 hrs or 25.62 min (down from 26 min), N2 = 1.36 cps (essentially unchanged), tau2 = 10.20 hrs (essentially unchanged), R^2 = 0.983.
Ms. X
Try this link.
http://www.4shared.com/folder/AZ_9jUMG/_online.html
Be Well,
Thanks for the input!
Right now I'm working on the data for the current detection, but I did give your analysis a quick read. I will delve into it more deeply shortly.
In the meantime, here is a 30-second, off-the-top-of-my-head response analysis.
I think what you are seeing is solver bias based on a poor inherent fit "R" concept for the problem space. I suspect that if you spiral into the next step of your solution by left-truncating the data to remove any contamination from your estimated 25-minute short-half-life isotope, and then rerun your solution tool on the new data set, you will see some inherent bias in the previous calculation of the estimated 10.2-hour-half-life isotope.
A better "R" concept for this problem set would be a function of the end point variances between a specific delta T, relative to the overall estimated half life.
With respect to limiting the analysis to just 2 isotopes, I can see that from a deterministic standpoint. However, there is information contained in the data set past the 2nd isotope, and that data can be used probabilistically to identify an isotope based on the boundary conditions; concurrently, it may also lend credence to the identification of the prior isotope. That credence may then be used to re-spiral the solution set with better starting conditions.
The end summary would be that a very strong "R" value in this problem space can very strongly point to the wrong identification, whereas a low "R" value does not necessarily point to a wrong identification.
I suspect that if a calculation is made of the 95% confidence limits on the third isotope's half-life, the limits will be large, due to the fact that you are fitting two extra parameters to what is essentially noise. A realistic two-isotope model fits the entire data set pretty well without any additional parameters for a third isotope, and adding the third isotope would overspecify the model, resulting in a lack of convergence on a solution. I have already tried this and verified it. Another problem is drift in the background cps caused by the Inspector. I recall seeing a slight drift, which could also increase the uncertainty when you are trying to fit a third isotope line to the low cps signals.
Ms. X
One can easily check the self-consistency of your three-isotope, straight-line-segment model as follows:
The predicted raw cps at t = 0 min should be the sum of all three of your exponentials' amplitude factors (shown in the first figure) at t = 0 min, or simply 5.1004 cps + 1.7378 cps + 0.8077 cps = 7.6459 cps. The first raw cps count-rate number at t = 0 min is only 5.81 cps. So, for the most significant data point (the one with the highest signal-to-noise ratio), your model is already in error by ((7.6459 cps - 5.81 cps)/5.81 cps)*100% = 31.6%! For your model, the amounts of each isotope at t = 0 min are too high and need to be reduced significantly. But when you do, guess what? The rest of the model does not fit the rest of the cps data unless you change the half-life decay rates too. This is why we use the solver.
Ms. X
On Nov. 3rd you said:
"For an indication of the quantity of variability expected in the data by the POTRBLOG team, note the number of significant digits in the exponential curve fit data. A future video on this subject may follow".
My question for you is: do you really think that the .1004 in 5.1004 is significant? If you do, then you may not understand how the Excel trendline routine really works. The number of digits displayed in the trendline equation coefficients has nothing to do with the underlying accuracy of the coefficients. The Excel trendline routines weren't designed to do that. You will need to calculate the model accuracy using the Excel LINEST function; see http://office.microsoft.com/en-us/excel-help/linest-HP005209155.aspx.
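To see the distinction being drawn here outside of Excel, one can fit the same log-linearized trendline and look at the standard error of the slope rather than the number of digits displayed. The sketch below uses synthetic counts, not the blog's data, and plays roughly the role that LINEST's optional statistics play in a spreadsheet.

# Displayed digits vs. statistical accuracy: fit ln(cps) against t (what an exponential
# trendline does internally) and report the standard error of the slope. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 300, 15.0)                                # minutes
cps = 5.0 * np.exp(-np.log(2) * t / 60.0) + rng.normal(0, 0.3, t.size)
mask = cps > 0                                             # log() needs positive values

coeffs, cov = np.polyfit(t[mask], np.log(cps[mask]), 1, cov=True)
slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])

half_life = -np.log(2) / slope
half_life_err = np.log(2) * slope_err / slope ** 2
print(f"slope = {slope:.6f} +/- {slope_err:.6f} per minute")
print(f"half-life = {half_life:.1f} +/- {half_life_err:.1f} minutes")

The slope prints with six decimal places regardless of how uncertain it actually is; only the standard error tells you which of those digits mean anything.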
Be Well,
Those are excellent insights, and they are exactly why one has to use data from the larger System of Systems (SoS) to solve the model.
The Excel solver doesn't have the capability to use SoS information/data in its solution, nor would it be capable of converging on a solution, because it does not have the ability to determine quiescence.
Specifically for the 10/26/11 data set, and again just from an off-the-top-of-my-head analysis, here is an example of a solution methodology using data from the larger SoS data set:
1. Model the 3rd isotope as Xe-133 and the 2nd as I-133; at that point the t=0 CPS of Xe-133 is likely 0. That is based on the collection method and the fact that Xe-133 is a daughter product of I-133.
2. Since the curves the solver is fitting to the data are exponential in nature, the "R" methodology used to fit the curves should be log in nature (and not linear as was used); otherwise the solver will overemphasize data that it should not.
3. Given the greater uncertainty with the 3rd "Xe-133" isotope, use a probabilistic approach to identify a population of possible isotopes, and then search for quiescence by using a list of isotopes that fit what is expected given the possible first two isotopes.
4. One can develop the candidate population for the 3rd isotope by selecting the data endpoints of a Time Delta common to the 3rd isotope, and then fitting exponential curves from the +3-sigma T-min endpoint to the -3-sigma T-max endpoint, and then again from the -3-sigma T-min endpoint to the +3-sigma T-max endpoint.
On a log scale, the result is a cone of possible candidates. The 'quality' of that solution cone is driven by the ratio of the Delta T selected to the half-life of the candidate isotope.
The cone of candidates is bounded by the intersection of that cone with the background radiation line, and by the list of isotopes that are non-quiescent relative to the overall solution (see #3 above).
In short, that is what is entailed in the manual method we are using; it is possible to automate that method (some AI would be helpful). If automated, one could also report on the certainty of the solution within the given boundary conditions.
We'll wager that if you tweak your fitting methodology to account for the exponential form of the solution; then remove the first, short-half-life isotope from the data set by removing the first few hundred minutes of data; then set the 3rd (now 2nd) isotope to be the daughter of the 2nd (now 1st) isotope; and assume that they are not at equilibrium at t=0, hence the Xe-133 has CPS=0 at t=0 (based on the collection methodology); then the solution will tend to converge on the Iodine-133 / Xenon-133 decay chain.
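For concreteness, here is a hedged sketch of that wager: left-truncate the data, treat the second isotope as the daughter of the first, and build the Xe-133 CPS=0 at t=0 constraint directly into the model. The I-133 and Xe-133 half-lives used to generate the synthetic data are the published values (roughly 20.8 hours and 5.2 days); the amplitudes, efficiency factor and noise level are assumptions for illustration only.

# Sketch of the constrained fit proposed above: a parent (I-133-like) plus an in-grown
# daughter (Xe-133-like) whose count rate is forced to zero at t=0, fitted to
# left-truncated, background-subtracted cps. Synthetic data and assumed amplitudes.
import numpy as np
from scipy.optimize import curve_fit

LN2 = np.log(2)

def chain(t, A0, eff, tau_p, tau_d, Nb):
    # Parent activity A0*exp(-lp*t) plus daughter activity grown in from zero at t=0,
    # scaled by a relative counting efficiency 'eff', plus a residual background Nb.
    lp, ld = LN2 / tau_p, LN2 / tau_d
    parent = A0 * np.exp(-lp * t)
    daughter = A0 * ld / (ld - lp) * (np.exp(-lp * t) - np.exp(-ld * t))
    return parent + eff * daughter + Nb

rng = np.random.default_rng(3)
t = np.arange(6, 170, 0.5)                      # hours since collection; first ~6 h truncated
cps = chain(t, 1.7, 0.4, 20.8, 125.9, 0.05) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(chain, t, cps, p0=[2.0, 0.5, 15.0, 100.0, 0.0])
print(dict(zip(["A0", "eff", "tau_parent_h", "tau_daughter_h", "Nb"], np.round(popt, 2))))

If the half-lives returned by such a fit land near 20.8 hours and about 126 hours, that is the sense in which the solution "converges on" the I-133/Xe-133 chain.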
Be Well,
With regard to your November 6, 2011 11:50 AM comment: the fitted exponent values are used to calculate the half-lives.
The ballpark half-life variability we are expecting can be determined by looking at the number of significant digits in the exponent and using +/- the rounding error to calculate the range of variability we would expect.
It is just a simple rule of thumb, mostly there to avoid solving for n-th-degree accuracy with data that is only good to (n-1)-th-degree accuracy.
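As a worked example of that rule of thumb (with a made-up exponent, not one of the fitted values above): if a trendline displays a decay constant of 0.0006 per minute, the implied rounding band is +/- 0.00005, and the corresponding half-life range follows from t_half = ln(2)/k. The central value here was chosen only so that it lands near the 19.25-hour figure quoted earlier in the thread.

# Hypothetical example of the rounding-error rule of thumb: the last displayed digit
# of the exponent implies a +/- band, which maps to a half-life range via ln(2)/k.
import numpy as np

k_displayed = 6e-4          # assumed trendline exponent, per minute (not an actual fitted value)
rounding = 0.5e-4           # half of the last displayed decimal place

for k in (k_displayed - rounding, k_displayed, k_displayed + rounding):
    print(f"k = {k:.5f}/min  ->  half-life ~ {np.log(2) / k / 60:.1f} hours")

With only four decimal places displayed, the implied half-life range spans roughly 17.8 to 21.0 hours around the nominal 19.25 hours.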
Ms. X
With regard to your 4 points above, here are my responses:
1. You don't seem to understand that, in fitting an exponential for the so-called "Xe133" at the end of the raw cps data, you have to extrapolate it back to the beginning of the raw cps at t=0 to be consistent. Unless the "Xe133" was somehow magically created only for the part of the curve that you are fitting. The same argument also applies to the so-called "I133" fit. They must all be assumed to coexist for the entire raw cps data set. If they don't, then a real three-isotope regression model would tell you so by giving zeroes for the amplitude coefficients, or it would fail to converge on a solution because the model is overspecified.
2. You may not be aware that internally the Excel trendline code linearizes the data by taking the log of the data, just as you do to make it a straight line for plotting, so Excel can apply a LINEAR REGRESSION to it. For a linear regression R is quite valid as a correlation coefficient. So your statement that R is somehow not appropriate as a measure of model accuracy is factually incorrect. The low R value for the "Xe133" fit is Excel's way of telling you that your model is pretty crummy at predicting the outcomes of the data set, based on time and the model parameters.
3. Yes, I think this approach would work if 1) your model were self-consistent, and 2) the signal-to-noise were much higher for the long half-life data. Unfortunately, neither of these conditions is true at present. If you could collect the wipe sample over a much larger area, this might help with the identification.
4. Uh, I'm not sure I follow everything you are saying, because you're using terms that even an old curve-fitter like me has never heard before, but this sounds like a description of the regression analysis the Excel solver uses.
The last paragraph contains so much confusion that I need to respond to parts of each sentence.
"We'll wager that if you tweak your fitting methodology to account for the exponential form of the solution:...etc"
Look a few posts earlier and you will see:
N(t) = Nb + N1*EXP(-ln(2)*t/tau1) + N2*EXP(-ln(2)*t/tau2),
(Hint: EXP = exponential "e".) And it's self-consistent too, for the stated assumptions. And it is fitted using AI (the computer does it using the Excel solver; all you have to do is give it good starting values for the 5 parameters, go get a cup of coffee, and voila, the answers are in front of you). I can't post graphics of my model and raw cps data directly, or I would have posted them here. But did you open the spreadsheet at the link I sent you? It's all in there, and it really fits the data much better than the approach you are using. Look at it.
"then remove the first short half life isotope from the data set by removing the first few hundred minutes from the data set;...."
I don't think you can justify separating data and applying different models to it without making a lot of unsupported assumptions about what is really there in the raw cps. Be honest. The only reason you are doing this is because you want to use a simple canned Excel trendline tool and force the data to fit your tool, and then tie yourself in linguistic knots trying to justify your approach. Yes, if you really want a third isotope to be there, then by all means fit three straight lines to the data, and guess what? You will get three sets of numbers, albeit some of the fits are pretty crappy, but hey, we got three isotopes, so we will just ignore little problems like poor correlation and large confidence limits so we can just put this stuff out there.
"The ball park half-life variability we are expecting can be determined by looking that the number of significant digits in the exponent and using +/- the rounding error to calculate the range of variability we would expect."
No, this is factually incorrect, for the reasons stated above in my 11:50 AM post. You're confusing a computer's numerical accuracy in calculating a number (any number, really) with the statistical accuracy of fitting a curve to data with noise present. How can you get a number accurate to four decimal places when the noisy raw cps data is not accurate to one decimal place?
To be fair, you have made a good effort to improve your data taking by being fairly meticulous. However, your data analysis methodology still needs a lot of work. Putting credible, defensible error bars on the half-lives would be a good start in that direction.
http://www.scribd.com/doc/71754242/fukuxenon
Be Well,
With respect to the "magical" assumptions of item 1: it is an assumption that the Xe-133 CPS is functionally equal to 0 at T=0. If you have ever been to Disney World, it is obvious that there are some magical things in the world; however, when it comes to I-133 plating out and attaching to surface particulates, that's likely more chemistry than magic, and the same goes for Xe-133 NOT behaving that way. That is the driving assumption for Xe-133 CPS=0 at T=0. In short, the environmental sampling methodology tends to preclude the direct capture of Xe-133. If the chemistry of that assumption is incorrect, some evidence of that would make for good quantitative input which would help set boundary conditions. However, the magnitude of its actual relevance from the larger System of Systems perspective is questionable.
With respect to 2: the initial description of your methodology indicated that you created a sum-of-squares column and had the solver minimize it. Whether the solver linearized the data for the fit was not exactly clear. Regardless, the HEART of the issue comes down to whether or not the errors are Normally distributed across both curves being fitted. The "R" methodology we suggested would tend to overcome those issues; a simple log linearization of the curves would not.
Moreover (and again), the goodness of the "R" value by itself is NOT necessarily a good indicator of the solution; in our case, pushing for a high "R" value is possibly a sub-optimization of the solution system. Mathematically this is the case because it is a continuous fit pointing at a limited, discrete solution set. (See also item 3.)
Simply put, an R=.99 fit could point exactly to a non-existent half-life isotope, and moreover no other isotopes may exist in the statistical spread around that R=.99 fit; whereas an R=.004 fit could point to a true isotope half-life, and the extreme variances of that fit might point only at that one isotope. Or the "cone of candidates" could contain multiple isotopes, many of which might be ruled out via data from the larger system of systems.
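To make the weighting concern in this exchange concrete, here is a generic comparison on synthetic Poisson-style counts (not the 10/26/11 set): an unweighted straight-line fit through log(cps), which is what an Excel exponential trendline does, versus a direct nonlinear fit on the raw cps with a background term.

# Generic comparison relevant to the log-linearization debate: the unweighted fit in
# log space gives the noisy, low-count tail the same weight as the high-count start,
# and the trendline form has no background term; the direct fit avoids both issues.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.arange(0, 600, 15.0)                                 # minutes
dwell = 15 * 60                                             # seconds counted per point
true_cps = 5.0 * np.exp(-np.log(2) * t / 60.0) + 0.05       # exponential plus small background
cps = rng.poisson(true_cps * dwell) / dwell                 # Poisson counting noise

# (a) trendline-style: unweighted straight line through log(cps)
mask = cps > 0
slope, intercept = np.polyfit(t[mask], np.log(cps[mask]), 1)
print("log-linear fit half-life:", round(-np.log(2) / slope, 1), "min")

# (b) direct nonlinear least squares on the raw cps, including a background term
f = lambda t, a, hl, b: a * np.exp(-np.log(2) * t / hl) + b
popt, _ = curve_fit(f, t, cps, p0=[5.0, 50.0, 0.1])
print("direct fit half-life:    ", round(popt[1], 1), "min")

Neither sketch settles the thread's argument about which "R" is the right figure of merit, but it does show why the two approaches need not return the same half-life from the same counts.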
With respect to 3: if the model is not self-consistent it would be helpful to point out where; it would also be helpful to quantify how much Signal-to-Noise Ratio (SNR) would be required.
Obviously, the POTRBLOG team thinks the current SNR is viable because it is a component of a larger system-of-systems analysis. But we have a conjecture regarding your statement that the SNR is not high enough: to determine the actual required SNR, one would have to build the described POTRBLOG model and use it as a tool to solve for the required SNR for any given scenario.
Thanks for your input and, hopefully, your continued participation. Even though we have only had time to flesh out your inputs at a "3 minute" analysis level, it has been a good quick thought exercise. We look forward to delving deeper and doing some more empirical analysis of the data using your critiques.
Be Well,
Your November 6, 2011 6:33 PM comment was caught in Blogger's spam filter and we did not see it or approve it until just now.
Fortunately most of your questions/comments are already answered in our November 6, 2011 8:39 PM comment.
However, in regard to your belief that "I don't think you can justify separating data and applying different models to it without making a lot of unsupported assumptions about what is really there in the raw cps": in all actuality, had we waited an additional 300-400 minutes before taking the data, it would be a non-issue, because the data would not be there to begin with.
In regard to the 'ballpark variability', it can't be "factually incorrect" because it is our SWAG of the half-life variance we are expecting; it is a self-reflecting statement to give the reader some insight into our thought process. Now of course it may not be a good SWAG, but that is NOT something you can quantify with the analysis methodology you are using; specifically, see our previous conjecture about quantifying the required SNR for the overall model.
The best way for POTRBLOG to convince scientists and engineers that a third, lower-amplitude, longer-lived isotope really exists is to collect a much more concentrated sample, so that the signal-to-noise in the cps data is much higher than in the current cps data. In science, new detection claims have to pass the "six-sigma" rule. That is, the anomaly has to be more than six standard deviations above the noise floor in the data for a positive detection. At present, POTRBLOG's third isotope detection fails the six-sigma rule.
If scientists had to choose between two models, one simpler model that fits all the data pretty well, and a more complicated model, applied subjectively, which gives a poor prediction for the additional model parameters, they would always choose the simpler model based on Occam's Razor. To paraphrase it, "Occam's Razor is a principle that generally recommends selecting from among competing hypotheses the one that makes the fewest new assumptions".
Be Well, your last points bring up some very telling issues.
(1) We at POTRBLOG are not trying to convince ANYONE else; we share our information in hopes that others will critique it (or provide more data) so that we may improve our own risk-mitigation strategies. In fact, we have ZERO conflict of interest; can "Be Well" say the same?
(2) Your understanding of the "Six-sigma" rule is incorrect. The "Six" comes from the range of +/- 3 Sigma; the distance between -3 Sigma and +3 Sigma is 6 Sigma. Had you understood that, you would have said the sample had to be +3 Sigma, not 6 as you stated. But even that is not a correct estimation of what "science" expects, at least based on NIST's standards. NIST expects a +2 CSU for a positive identification in a situation where the default presumption is NO active source. A CSU is just a SWAG at Sigma (that means it involves guesses).
Moreover, in an environment where there is a KNOWN ACTIVE SOURCE (like Fukushima), as a quick rule of thumb, a wise "scientist" would set the detection limit at 1 CSU. In all actuality, given a known active source spewing out radioactive contamination, the No Detect/Detect criteria should be based on a 5% false-negative rate.
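For readers who want to put numbers on the detect/no-detect argument, the standard Currie (1968) decision formulas tie both the false-positive and false-negative rates to the background counts; they are shown here only to make the discussion concrete, with a made-up background count, and are not presented as either party's exact criterion.

# Standard Currie-style thresholds at 5% false-positive and 5% false-negative rates,
# computed from an assumed background count (not the Inspector's actual data).
import math

B = 400.0                          # assumed background counts in the counting interval
sigma_B = math.sqrt(B)             # Poisson standard deviation of the background

L_C = 2.33 * sigma_B               # critical (decision) level for a paired blank, alpha = 0.05
L_D = 2.71 + 4.65 * sigma_B        # detection limit, beta = 0.05 (Currie, Anal. Chem., 1968)

print(f"background ~ {B:.0f} counts, sigma ~ {sigma_B:.1f}")
print(f"declare a detection above net counts L_C ~ {L_C:.1f}")
print(f"a source is reliably detected (95%) above net counts L_D ~ {L_D:.1f}")

Lowering the decision level, as argued above for a known active source, trades more false positives for fewer false negatives.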
Additionally, you seem to fail to realize that what you identify as the 2nd isotope is actually the third isotope. The short-half-life detection is most likely Pb-214 and Bi-214, which have a composite half-life of 36.5 minutes.
Your "simple" model did a very poor job of fitting Pb-214 and Bi-214, it did so for the reasons we pointed out in previous comments. However, that issue is solvable by either left truncating the data in the zone of influence or by cleaning the data by spirally removing a 36.5 minute source until the CPS for the longer half life isotopes can be established at t=0.
Moreover, your "simple" model is not actually mathematically simpler, namely because you are solving for a continuous solution set, whereas the true solution set is limited and discrete.
Finally, you self-identify in your profile as a "Senior Scientist, Oak Ridge National Laboratory". Given the statements you have made, that leads us to consider one of two possibilities:
(1) Your work is strongly focused on a very narrow subject, and as a result you don't have full insight into the boundary-condition limits of the models you are using or the broader subject area you are discussing.
(2) You are not who or what you purport to be.
In that light, we have a simple question that may help identify if you truly are a denizen of ORNL.
Who is Ed and what is his relation to Pi?
You may have to think about that question for a minute; it is set in a form such that the answer is not easily Googled.
Well, it seems that whenever POTRBLOG doesn't like the answers that people give, you attack the source. My half-life of 26 min. is pretty consistent with Pb-214 in a radon decay chain, as described by the group at Berkeley Radiological Air and Water Monitoring at http://www.nuc.berkeley.edu/node/5108 and http://www.nuc.berkeley.edu/node/5481.
Oh, but wait, they have also been attacked by POTRBLOG for being, of all things, incompetent, because POTRBLOG didn't like the answers they gave either. So when POTRBLOG's arguments are weak on science, attack the questioners. I'm looking forward to your next detection and I hope you publish that data set too.
If you personally feel 'attacked', please accept our apology. We have answered EVERY question you have posed and responded to EVERY critique with facts and data. However, we have not noticed the same level of reciprocity in return.
Here is a direct opportunity to show a little reciprocity by answering these questions.
Pb-214 has a 26-minute half-life. How do you expect your model to identify its presence while excluding the presence of its daughter product Bi-214, which has a 19-minute half-life (for a total composite 36.5-minute half-life)?
Can you explain how our sample collection methodology would NOT capture Pb-214 and Bi-214 in equilibrium?
Why are you seemingly opposed to removing the effects of the Radon series from the data set?
As a "Senior Scientist" at ORNL do you believe that U.C. Berkeley BRAWM team is performing with all due diligence in its advice to the public?
Will you answer the challenge question which would tend to authenticate your Oak Ridge ties, namely
"Who is Ed and what is his relation to Pi?"
Will you state that you have no conflict of interest?
Be Well, if you are who you say you are, why not approach the issue with humility? After all, this is not some game of academic ping-pong; this is a real-world event where people's health and lives are at risk.
I'm sad to see the hurt feelings further down the chain, but I appreciated this discussion. Some of the points Be Well raised have been concerns of mine as well. I will take some more time and continue to consider what my opinions are.
Be Well, I look forward to taking a look at your model. And Ms. X, may I say again that I really appreciate your citizen-scientist efforts.
Maybe collaboration rather than strife is in order. Be Well, if you had a sample of fresh rainwater from St. Louis, couldn't you find a facility at ORNL to test it? Maybe there's even a clever way to suss out very dilute contamination levels using the SNS.
This method of decay counts is useful and interesting, but a spectrographic result would be convincing.
Best regards,
Aaron Datesman
Assistant Scientist
Argonne National Laboratory
Aaron Datesman,
The issues of concern are valuable; more data will be forthcoming for interrogation. One thing that has so far been lost in this discussion is the long-half-life fallout measured past the 4000-minute mark.
And while I can certainly understand the desire for "convincing", our threshold has been "prudence" from a cost-effective risk-mitigation perspective.
Aaron,
The model is in the November 5, 2011 12:56 PM post. However, it ignores chain isotope decay. I'm working on a simple two-isotope decay-chain model right now and should have some results by this weekend. As far as testing goes, most of the short-lived isotopes will be gone after a day or two of shipping, so the gamma-ray spectroscopy can be more accurately done locally, at the University of Missouri perhaps? I don't have access to ORNL equipment, the paperwork required for outside work is burdensome, and it would be prohibitively expensive for individuals.
Be Well -
I think I had to install something on my computer to get the file, but I had other windows open in the browser, so I didn't do it yet. I will. I'm quite curious about this.
Aren't there techniques other than gamma ray spectroscopy? Surely there's a way to trick the Department of Energy into doing this analysis using world-class equipment. It's just rain water, after all....
In looking around the net, it seems that gamma-ray spectroscopy is the gold standard. I have a friend in the Nuclear Science and Technology Division; I will see what I can do. Nevertheless, I still think we need a local collection followed by a prompt analysis, either here in Oak Ridge or in Saint Louis.
ReplyDeleteBe Well & Aaron Datesman, that is EXACTLY THE RUB. There are several groups in the immediate vicinity of Saint Louis who could and should do the collection and testing all by themselves; yet it is up to some unknown POTRBLOG site to try much using little.
I think the issue comes down to two things.
(1) The disaster does not fit the detection (or risk abatement) criteria/protocols for detecting illicit nuclear weapons activity/fallout.
(2) People are afraid of what they might find.
Aaron,
The second model, derived for a two-isotope chain decay, is:
N(t) = Nb - (N1*tau1/(tau2-tau1))*EXP(-ln(2)*t/tau1) + (N2 + N1*tau2/(tau2-tau1))*EXP(-ln(2)*t/tau2),
where the background count rate is Nb, the decay rates for isotopes 1 and 2 are N1 and N2, respectively, and the half-lives for isotopes 1 and 2 are tau1 and tau2, respectively, and t is the time in hours. I assume that N2 is a decay product of N1. Unfortunately this model is not consistent with the data. The first model where N1 and N2 are independent of one another fits the data really well. It can be found at http://www.4shared.com/folder/AZ_9jUMG/_online.html
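For anyone following along outside a spreadsheet, the chain-decay expression above transcribes directly into a model function that can be dropped into the earlier Python fitting sketch in place of the independent-isotope model; this is just the quoted formula, with t in hours.

# Direct transcription of the two-isotope chain-decay formula quoted above (t in hours);
# it can replace the independent-isotope model in the earlier least-squares sketch.
import numpy as np

def chain_model(t, Nb, N1, tau1, N2, tau2):
    ln2 = np.log(2)
    return (Nb
            - (N1 * tau1 / (tau2 - tau1)) * np.exp(-ln2 * t / tau1)
            + (N2 + N1 * tau2 / (tau2 - tau1)) * np.exp(-ln2 * t / tau2))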
Be Well, your input has been very helpful. The 2nd, deeper analytical pass into your critique has us looking at the "orphaned" short-half-life trend line in the charts shown at the top of the blog post. More to follow.
I would like to summarize my conclusions so far on the POTRBLOG Saint Louis 10/26/11 detection. A two-isotope model (with independent isotopes) fits the cps data quite well, with R^2 ~ 0.985. The initial decay due to isotope 1 is ~ 25.6 ± 5 minutes and the secondary decay due to isotope 2 is ~ 10.2 ± 2 hours. These error limits are somewhat tentative, and if I come up with better limits I will post them here. No third, longer-lived isotope is seen; of that I am sure. The initial cps count rates at t = 0 min are ~ 3.6, 1.4 and 0.49 for isotope 1, isotope 2 and background, respectively. The analysis is posted at:
http://www.4shared.com/folder/AZ_9jUMG/_online.html
In addition, I have attempted to model the 10/26/11 data with a simple two-isotope model where both isotopes are part of a decay chain (isotopes are dependent). However this model does not fit the observed cps data, so isotopes 1 and 2 are probably independent.
The ~ 25.6 ± 5-minute decay is strongly suggestive of a Uranium-238 decay chain, with Radon-222 decaying to Pb-214 (26.8-min half-life) and Bi-214 (20-min half-life). These are the main contributions to the early cps data via beta emission. The combined average decay time of both isotopes is consistent with the model's first isotope. Radon decay is a significant part of the normal terrestrial background radiation and has been around for billions of years. Not to worry.
However, the 10.2 ± 2-hour decay is not consistent with a radon decay chain, since after Bi-214 decays, the next observable (beta-emitting) isotope would be Pb-210, with a 22-year half-life. This decay chain is shown at:
http://upload.wikimedia.org/wikipedia/commons/a/a1/Decay_chain%284n%2B2%2C_Uranium_series%29.PNG
At the moment I do not feel comfortable speculating about the isotope(s) responsible for the 10.2 ± 2-hour half-life; I'm still researching that issue. In future detections, identification of the second isotope decay would be made easier if no gaps existed in the cps data. I understand that the gaps were caused by the need to perform background calibrations; however, the background appears to be quite constant in time for the 10/26/11 cps data, and I should think it would be fairly constant for future detections as well. I would recommend that an initial background be measured in anticipation of a rain event, and that a background be measured again after the 7.6-day cps data run; in this way all data during the entire run is available.
Be Well, thanks for putting in the effort; quantitative feedback is always appreciated, and it has been very helpful in identifying areas needing refinement.
However, don't be so quick to count out the long-half-life component; we have found at least one sample to still be radioactive at 6 CPM above background almost 30 days after the original reading. See
http://pissinontheroses.blogspot.com/2011/10/maximum-alert-persistent-long-half-life.html
We will test the 10/26/11 sample again when time and equipment become available.
In the meantime, we have developed a few "control" test scenarios that mimic the 10/26/11 detections, and hope to have that data available shortly.
Be Well, thank you for making us revisit our quick analysis methodology. It is clear to us that under certain circumstances there is a greater level of uncertainty than our rule of thumb allowed for; the issue is directly tied to the circumstances of when/if it is best to clean the data by subtracting out the background noise. The issue is made more complex given that our background measurements are conservative relative to the sampling data, and may show some influence from the sample itself.
At the moment, the uncertainty is not great enough to alter our risk-matrix criteria; however, as we build a larger population of detection data, those circumstances may change. Additionally, we plan on improving our shielding and sample/background setup to resolve any induced ambiguity.
Hopefully, you have been able to go through the synthetic data we generated and can now better see why we chose an SoS approach to develop prudent risk-mitigation criteria. We plan on releasing a third set of synthetic detection data that duplicates the I-133 and Xe-133 curves by filling the gap of the "orphan" trend line we showed in the original chart; it should generate some interesting discussion.
And again, thanks for your input.
Ms. X
Uncorrelated counter noise cannot be subtracted out, as I told you earlier; all it does is increase the noise. Averaging counter cps data is the best way to reduce noise. For that reason your synthetic data is worse than the raw data. That is why I chose not to fool with it.
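The point about subtraction versus averaging can be checked with a quick synthetic experiment (generic noise, nothing to do with either party's actual files): subtracting one noisy series from another adds their variances, roughly a factor of sqrt(2) in standard deviation here, while subtracting a single averaged background value adds essentially nothing.

# Quick synthetic check of the noise argument: point-by-point subtraction of a noisy
# background inflates sigma by ~sqrt(2); subtracting the averaged background does not.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
sample = rng.normal(1.0, 0.2, n)            # noisy sample cps around a constant level
background = rng.normal(0.4, 0.2, n)        # independent noisy background cps

pointwise = sample - background             # subtract the background point-by-point
averaged = sample - background.mean()       # subtract a single averaged background value

print("sigma of raw sample:             ", round(sample.std(), 3))
print("sigma after point-by-point sub:  ", round(pointwise.std(), 3))   # ~ 0.2 * sqrt(2)
print("sigma after averaged background: ", round(averaged.std(), 3))    # ~ 0.2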
Now that's a humorous excuse!