I am using an analytical balance for gravimetric analysis of samples.
At the beginning of the day, I record the initial weights of my dust collectors, plus a field blank and a "weighing scale blank." The "scale blank" is then left sealed beside the analytical balance.
Ideally, the "scale blank" should show no weight change, since it is sealed and equilibrated to the same conditions as the balance.
However, when I reweigh the "scale blank" after 12 hours, during the post-weighing of the samples, there is usually a difference between its initial and final weights.
My question is: how does this weight difference affect the post weights of all my samples? Should it be reported as a precision issue, or do I add/subtract the weight difference of the "scale blank" to the post weights of the samples?
Thanks in advance for your help.
Wyatt Harris
What kind of model do you use?
Evan Nelson
It's pic related. Do you need a brand or any more info?
Dylan Jones
Nah. Was just wondering if it was a Mettler.
Oliver Reed
Yeah. Mettler Toledo.
Jonathan Baker
congratulations, you've constructed a moon tracker.
Dylan Morris
Not my field....
Do all your containers weigh the same?
If your scale is weighing the same object differently throughout the day, you should check your scale for a malfunction.
You should not change the values the scale produces to compensate for experimental error. Error correction is death. Just reduce the stated precision, or get a better scale.
And I don't understand this "blank" business. Tare weight is container-specific, unless the variance in your containers is less than the stated precision of the scale or the error of the samples.
www.nist.gov/pml/wmd/pubs/upload/hb44-05-all.pdf
Christopher Richardson
Thanks. Simply stated, the scale is measuring the same sample differently at different times. So if I don't adjust the data, how do I reflect this anomaly in the reporting of the data?
Carson Rodriguez
How often is the balance calibrated? What are the precision and uncertainty values? Have you tried weighing the same thing three times in succession to see what effect plate position has?
Blake King
Again: I don't do metrology. To determine sample error, you use different scales and average the error, or use a standard and take measurements over time on the same scale. But my inclination is to use equipment that is calibrated for the sample size you are measuring; then there is no equipment error to speak of.
If you can't get a calibration standard, I would measure your "blank" on different scales to make sure it is not just a bad scale. But I don't do environmental science, so I'd ask whoever you are doing the work for what their procedure is.
Bottom line: the assumption is that things don't change weight unless something is added or removed. That leaves experimental error, which is ever-present, so you use equipment that measures within the error acceptable for your hypothesis.
Jason Russell
better idea:
weigh your scale blank at multiple times during the day and fit a spline to the curve, then adjust your sample measurements by that function
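Something like this, if you wanted to try it: sketch of the drift-correction idea with made-up numbers, using a simple polynomial fit in place of a spline to keep it dependency-light (a scipy spline would drop in the same way).

```python
import numpy as np

# Times (hours since initial weighing) at which the sealed "scale blank"
# was reweighed, and the observed drift (mg) relative to its initial weight.
# These numbers are invented for illustration.
blank_times = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
blank_drift = np.array([0.000, 0.012, 0.021, 0.028, 0.031])  # mg

# Fit a smooth drift curve (quadratic here; a spline would be analogous).
drift_fn = np.polynomial.Polynomial.fit(blank_times, blank_drift, deg=2)

def correct(sample_mass_mg, t_hours):
    """Subtract the modeled blank drift from a reading taken at t_hours."""
    return sample_mass_mg - drift_fn(t_hours)

# A sample weighed 8 h after the blanks were first weighed:
raw = 105.734  # mg
corrected = correct(raw, 8.0)
```

Note that this only makes sense if the drift really is a smooth function of time; as others point out below, that is exactly the assumption in dispute.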
Luke Hill
Calibration is yearly. Precision and uncertainty: will have to check on this. Haven't done so.
But supposing you didn't find plate position to have an effect, how is this difference in weight of the same material at different periods reported?
Thanks again. Actually, using multiple scales is a no-no.
Good idea. However, is this acceptable practice in metrology?
Colton Jenkins
I don't agree. Look. Scales get abused; people bump them, drop them, spill shit in them. They are a lot more complicated than you'd believe. Their failure does not fit any curve. Either get a different scale, or a more precise scale, but, unless the precision is not needed and you can just round off, fucking with the data is a no no.
Even so, if it measures to the penny and it is off by a nickel, why would you trust it to a dollar? A bad measurement is a bad measurement, and fixing it in the data presentation is not science: it is politics.
Christian Gray
no, it's not. it's a hackjob.
what you can do is learn about significant figures and stop recording things you don't understand.
you can do it if you know what causes the effect and can reproducibly predict it, and also understand what effect it has on your samples.
Lucas Cox
to follow that up, do you have a hypothesis as to what causes it?
Gavin Lewis
An analytical balance should be calibration checked with a minimum of three standards at least twice a day. Check out the USP standards. If you have balance uncertainty it can become your error. If you have plate uncertainty you can work to minimise it by placing your sample in as close to the same position in the same orientation each time.
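To make the twice-daily check concrete, here's a rough sketch. The standard masses and the acceptance tolerance are hypothetical, not from any USP table; look up the real limits for your balance class.

```python
# Hypothetical twice-daily balance check against three reference masses.
# Nominal values (mg) and tolerance are illustrative only.
standards_mg = {"S1": 100.000, "S2": 500.000, "S3": 1000.000}
tolerance_mg = 0.05  # acceptance window, assumed

def check_balance(readings_mg):
    """Return pass/fail per standard for one check session."""
    return {
        name: abs(readings_mg[name] - nominal) <= tolerance_mg
        for name, nominal in standards_mg.items()
    }

morning = check_balance({"S1": 100.02, "S2": 500.01, "S3": 999.93})
# S3 fails (0.07 mg off): take the balance out of service until it passes.
```

The point is that a failed check gives you a documented reason to suspect the balance on a given day, instead of guessing after the fact.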
Andrew Brooks
Using different scales is only a laboratory no-no if you measure different samples on different scales; not if you measure all samples on different scales.
Calibration does not imply function. Intermittent failure can be missed in calibration, but certainly shows up when the tech (you) gets different results for the same measurement at different times.
If you can't find a standard reliability range for the scale by decreasing the precision of the experiment, can't measure the error against a trusted standard and get a repeatable reading, and can't measure all samples on different scales to get an experimental average, then I'm afraid I know of no way to link the possible mechanical failure of the scale to your data, except to say the scale doesn't work.
Things don't fail predictably. By using different equipment, you can separate out the noise of standard measurement, but this assumes all the equipment is working. That noise is a Common Cause of Variation. Equipment failure is a Special Cause of Variation; it has a different standard deviation than Common Causes and cannot be corrected for.
Owen Collins
Hypothesis is an equipment precision problem, but I have yet to check if it is within the acceptable range.
What I meant by "how to report it" was: do I report a ±% error based on those differences?
In addition, on some days there is no difference, but on other days there is.
Luke Wood
>Check out the USP standards
Ok, will do.
>If you have balance uncertainty
How do I treat this uncertainty? Do I reflect it as a % error in the data of my samples?
Leo Parker
>Hypothesis is equipment precision problem
>On some days there is no difference, but on other days there is
so do you think it is an equipment problem or an environmental problem?
what the fuck kind of mickey mouse scientist are you?
Thomas Barnes
>Using different scales is only a laboratory no-no if you measure different samples on different scales; not if you measure all samples on different scales.
Laboratory instructions explicitly state not to use other scales. You use one scale throughout to improve the reliability (consistency) of the data. Sorry, but I'll have to disagree with you here.
But if your purpose is just to see if the anomaly occurs on different scales as well, then I get the point.
Again, your inputs are highly appreciated. I will look further into this based on what you said. Thanks.
Camden Evans
This is wordy.
If you have evidence that the scale doesn't work, your data is compromised. There isn't any correction you can make, because equipment failure is not equipment noise and has no statistically acceptable correction.
Matthew Baker
>what the fuck kind of mickey mouse scientist are you?
ez
Can't be an environmental problem, because the material is equilibrated to the room conditions of the scale, and the room conditions are virtually constant. The only thing left is an equipment problem. All I'm interested in is the appropriate way of reporting the anomaly when presenting the data.
Christian Anderson
>fucking with the data is a no no.
no, fucking with the data without telling anyone is a no-no
if you're explicit about the transformations you've performed and you can justify the transformations, fuck with the data all you want
Connor Gray
Thanks. Yeah, it is kinda wordy.
So do you think this can be considered equipment failure? The difference in weight is very small, but it goes against the ideal that there should be zero difference.
Anyway, I do not mean to make corrections to the data. I was just wondering if there is a best practice for reporting this properly when presenting the data, say a ±% error or something?
Gavin Gutierrez
your equipment is being affected by gravitational disturbances caused by the moon
the greatest error should occur when the moon is on the opposite side of the earth from you
after you verify my hypothesis and publish your findings tell everyone the time wizard sent you on your mission
Brody Phillips
Underrated post
Isaiah Miller
I'd calculate measurement uncertainty and add it to the report.
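Roughly like this, with invented numbers: treat the day-to-day blank scatter as one uncertainty component, combine it in quadrature with the balance's stated repeatability (assumed here; check your spec sheet), and report the expanded uncertainty instead of correcting the data.

```python
import statistics

# Day-to-day differences (mg) between initial and final weighings of the
# sealed scale blank; invented values for illustration.
blank_diffs = [0.000, 0.012, -0.004, 0.021, 0.000, 0.008]

# Treat the blank scatter as a Type A uncertainty component.
u_blank = statistics.stdev(blank_diffs)

# Balance repeatability (assumed, from the manufacturer's spec sheet),
# combined in quadrature.
u_balance = 0.010  # mg, assumed
u_combined = (u_blank**2 + u_balance**2) ** 0.5

# Expanded uncertainty with coverage factor k = 2 (~95 % confidence).
U = 2 * u_combined
print(f"report each net weight as x ± {U:.3f} mg (k=2)")
```

This follows the general GUM approach; the exact components you combine (drift, repeatability, calibration) depend on your lab's procedure.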