Saturday, July 6, 2013

Measuring or Are We There Yet?

I was going to do a post on Kalman Filters, and how great they are. They are great, and completely useful. A Kalman filter will let us take a bunch of error out of a collection of samples and make something useful out of the information. I was reading an article by Jack Crenshaw about Kalman filters, and he said something very profound:

 Whenever I measure any real-world parameter, there is one thing I can be sure of: the value I read off my meter is almost certainly not the actual value of that parameter.


Jack gets it. He worked on getting the Apollo spacecraft to the moon with little or no computing power. He did most of the work before Kennedy announced that the US would go to the moon; he was working on trajectories and such in the 1958-1960 time frame. He can present the most complex bit of math in a way that almost anyone can understand, if they are willing to read what he says.

Where it got me thinking, though, is that we need data to do stuff. When we get a location fix off a GPS, for instance, are we sure we are there? My phone and tablet both give a DOP value, Dilution Of Precision. The DOP is the current inaccuracy of the reading I have displayed. It involves the quality of the data we are getting from the satellites right now.

When we look at the instruments in an aircraft, do we take them as gospel? If you are like me, the altimeter only reads accurately to the nearest 100ft, so I don't try to interpolate to the nearest foot. The value I can read is good enough. Same with the airspeed: I don't really care if I am going 130kts or 131kts, as long as I am maintaining a margin over stall speed.

How do you convince a computer to say close enough? Computers are a big challenge. A computer measures to the nearest bit, whatever it is measuring. If the sensor is outputting perfect information, and the analog to digital converter is converting this perfect data to bits, life is good. The trouble is, the sensors may not return perfect information, the A/D converter may have some non-linearity, or the whole system may introduce some noise.
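As a sketch of that last step, here is what even an ideal A/D converter does to a perfect voltage: the computer only ever sees the nearest step. The resolution and reference voltage below are made-up example values, not from any particular part, and the function names are my own.

```python
# A sketch of ADC quantization: even a flawless sensor signal gets
# rounded to the nearest code. 10 bits and a 5.0 V reference are
# arbitrary example values.

def adc_read(voltage, v_ref=5.0, bits=10):
    """Quantize a voltage to an integer code, as a 10-bit ADC would."""
    steps = 2 ** bits                       # 1024 codes for 10 bits
    code = round(voltage / v_ref * (steps - 1))
    return max(0, min(steps - 1, code))     # clamp to the valid range

def code_to_volts(code, v_ref=5.0, bits=10):
    """Convert a code back to volts; the round trip loses detail."""
    return code / (2 ** bits - 1) * v_ref
```

With these numbers, 2.500 V and 2.501 V land on the same code, so the computer literally cannot tell them apart; that is before any sensor error or noise gets involved.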

All the data the computer can get is the data the sensors are measuring. It probably isn't exactly the value that is real right now, but it is probably close enough. How can we be sure the values are reasonable? We can correlate them to recent events.

If we have a temperature sensor, and we are reading 100 degrees for example, is this right? We can take the most recent 10 or 100 samples, and see if there is a significant change. If the last 100 samples were 45 degrees, we might question the 100 degree sample. If 50 of the last 100 samples were between 90 and 98 degrees, and the other 50 were between 102 and 110 degrees, then we should keep this sample and say it is reasonable. However, if we are seeing a trend, and the 100 previous samples showed a steady climb from 90 degrees to 110, and this sample came in at 100, we might again question it.
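The simplest version of that reasonableness check fits in a few lines: compare a new reading against the recent history and flag big jumps. The window size, threshold, and `is_reasonable` name below are my own illustrative choices, not a real API, and this only covers the "big jump" case, not the trend case.

```python
import statistics

# A sketch of the "is this sample reasonable?" check: accept a new
# sample only if it sits within a few deviations of the recent history.
# The 3-sigma threshold and 1-degree floor are arbitrary choices.

def is_reasonable(history, sample, max_sigmas=3.0, min_spread=1.0):
    """Return True if the sample is close to the recent samples."""
    mean = statistics.fmean(history)
    # Guard against a perfectly steady history, where any change at
    # all would look infinitely unlikely.
    spread = max(statistics.pstdev(history), min_spread)
    return abs(sample - mean) <= max_sigmas * spread
```

A history of steady 45s rejects a sudden 100; a history scattered between 90 and 110 accepts it, matching the examples above.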

We need to consider the source as well. Suppose a pressure sensor acting as a pitot sensor, measuring airspeed, is reading about 180kts. If the indicated pressure increases relative to the static pressure as the aircraft climbs, there may be a blockage. The trapped air will hold the pressure from the altitude where the blockage occurred, while the static pressure keeps decreasing, making the aircraft seem to accelerate as it climbs. Accidents have been caused by blocked pitot tubes.

All measuring devices have inherent limitations. A ruler from one manufacturer will probably differ slightly from another manufacturer's ruler. Electronic sensors coming out of the same factory will have slight differences from device to device.

Which answer is right? It probably doesn't matter. We just need to be good enough for the situation we are in. We can use math to turn questionable information into accurate information.
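That math is exactly where the Kalman filter comes in. Here is a minimal one-dimensional sketch for estimating a constant value from noisy samples; the function name and the process/measurement variance numbers are my own illustrative guesses, not from Jack's article.

```python
# A minimal one-dimensional Kalman filter for a constant value, e.g.
# a steady voltage read through a noisy sensor. process_var and
# meas_var are illustrative tuning values.

def kalman_1d(samples, process_var=1e-4, meas_var=0.5):
    """Return the filtered estimate after each sample."""
    estimate = samples[0]   # start from the first reading
    error = 1.0             # initial estimate uncertainty
    out = []
    for z in samples:
        # Predict: the value is assumed constant; uncertainty grows a bit.
        error += process_var
        # Update: blend prediction and measurement by their uncertainties.
        gain = error / (error + meas_var)
        estimate += gain * (z - estimate)
        error *= (1.0 - gain)
        out.append(estimate)
    return out
```

Feed it a couple hundred readings that bounce around 5.0 and the estimate settles close to 5.0, much steadier than any single sample: questionable information in, something useful out.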


  
