Light can act as both a wave and a particle, and the new equation treats it as both at once.
Let's do some "thought experiments."
Take a train going down a valley and set up sensor stations in a circle around it. Each station has two sensor sets: one to measure the speed of the light coming from the headlight, the other to measure the speed of the train.
Every sensor station except two shows that the speed of light is constant no matter how fast the train is going.
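The claim that every station reads the same light speed regardless of the train's speed matches special relativity's velocity-addition rule. A minimal numerical check (a sketch, not the experiment itself; station names and speeds are made up):

```python
# Relativistic velocity addition: w = (u + v) / (1 + u*v/c**2).
# When u is the speed of light, w comes out as c for any source speed v.
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v, c=C):
    """Combine two velocities along a line per special relativity."""
    return (u + v) / (1 + u * v / c**2)

# Light from a headlight on sources at various speeds:
for source_speed in (0.0, 30.0, 0.5 * C, 0.99 * C):
    measured = add_velocities(C, source_speed)
    print(f"source at {source_speed:.4e} m/s -> light measured at {measured:.6e} m/s")
```

Algebraically, (c + v) / (1 + cv/c²) simplifies to exactly c, which is why the stations all agree.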
The sensor looking at the caboose measures the speed of the train but can't see the headlight at all.
The sensor looking directly at the oncoming train sees both the train and the headlight and measures the speed of each, except at time 0/0, when the train hits the sensor and destroys it, resulting in no measurement.
Change the train to a bike with a headlight and take the same measurements. At the rear, no light is seen and no speed difference can be measured. At the front, at time 0/0, the bike hits the sensor and tips over.
Put a really tiny headlight on a gnat. A sensor pointed at the gnat's rear end shows no reading. There is also no reading at time 0/0, since a gnat has more sense than to crash into a sensor.
Silly, maybe, but real measurements are real measurements and should not be ignored just because they don't fit preconceived notions (the Galileo "defense").
The same type of analysis works for modern scientific records. Everything is tweaked to fit standard curves: best-fit straight lines or bell curves. Readings beyond three standard deviations (3SD) are usually simply ignored.
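The 3SD cut described above can be sketched as a generic outlier filter (this is an illustration of the common practice, not any particular lab's pipeline; the data values are invented):

```python
# Sketch of the common "discard anything beyond 3 standard deviations" rule.
import statistics

def drop_beyond_3sd(readings):
    """Split readings into (kept, discarded) using a 3-standard-deviation cut."""
    mean = statistics.fmean(readings)
    sd = statistics.stdev(readings)
    kept = [r for r in readings if abs(r - mean) <= 3 * sd]
    discarded = [r for r in readings if abs(r - mean) > 3 * sd]
    return kept, discarded

# Fourteen ordinary readings plus one obvious spike:
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3,
        10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 80.0]
kept, discarded = drop_beyond_3sd(data)
print("kept:", kept)
print("discarded:", discarded)
```

Note the filter's weakness: with only a handful of readings, a lone spike inflates the standard deviation enough to hide itself, so the cut only catches it once there are enough "normal" points around it.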
Computer recording systems may capture all of the raw data but routinely discard discrepant readings that don't fit the assumptions.
If you look at a set of raw data before it is adjusted, you may find some data points that are entirely out of true. These are typically written off as spikes in the data.
The argument can be made that these are not real data points but errors caused by setting the gain too high, bad sampling, someone bumping the machine, a gamma ray hitting a sensor, or some other error factor. All of these arguments are valid.
When a test is re-run, the discrepants may or may not show up a second time, but after 5 or 10 iterations it is quite likely that discrepants will reappear, not necessarily in the same place, but as recordable information.
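That recurrence is expected even under a plain Gaussian noise model: each reading has roughly a 0.27% chance of landing beyond 3SD, so runs with many samples will keep producing a few. A small simulation (sample counts and seed are arbitrary assumptions for illustration):

```python
# Why discrepants keep reappearing across re-runs: under pure Gaussian noise,
# about 0.27% of readings fall beyond 3 standard deviations, so any run with
# enough samples is likely to record some.
import random

random.seed(42)  # arbitrary seed for repeatability

def run_experiment(n_samples):
    """One simulated run: count standard-normal readings beyond 3 SD."""
    return sum(1 for _ in range(n_samples) if abs(random.gauss(0, 1)) > 3)

runs = [run_experiment(1000) for _ in range(10)]
print("beyond-3SD counts per run:", runs)
print("runs with at least one discrepant:", sum(c > 0 for c in runs), "of 10")
```

With 1000 samples per run, each run expects about 2.7 such readings, so most re-runs show at least one, just not in the same place twice.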
This raises the question, even if these are sampling errors: what are the data points saying to the researcher?
Accept the discrepants and try to fit them into a straight line or best-fit curve, and the lines become circular, spinning around.
What is the mathematical formula that describes the new data? I have no idea (math tends to cause problems in my life).
The modified Einstein equation e=m(c does describe the output, but the derivation of the formula is yet to be determined.