Thursday, February 9, 2017

The Limits Of Scientific Inquiry & Research (1)


An unnerving item that appeared not long ago claimed that nearly half of all scientific papers are read by only a handful of people. One estimate is that 1.8 million articles are published each year, in about 28,000 journals. This raises the question: who actually reads those papers?

According to one 2007 study, half of academic papers are read only by their authors and journal editors. This is nothing short of astounding, but it makes sense once one understands what research scientists actually do, which is to narrow their focus almost exclusively to their specific specialty areas. In addition, specialty areas themselves can be reduced to sub-specialties: solar physics, for example, has a sub-discipline of solar flare magnetohydrodynamics, and another of solar flare prediction. Perhaps the former comprises 5 percent of solar physics and the latter 2 percent. Take the overlap between them and you may find only 50 solar physicists. Will they read beyond the 250 papers published annually in those two areas? Doubtful! There simply isn't the time to do so, on top of lecturing, attending conferences to present papers, etc.

Even more sobering, the authors found that 90 percent of papers published are never even cited by other researchers. Again, this isn't surprising, and let's also bear in mind that papers are of unequal quality - much of the lesser quality due to "publish or perish" pressure. This mandate means professors will churn out papers merely to meet the quotas of their institutions, and may not have anything genuinely novel to report. Hence the low quality.

This brings us to the other aspect: what methods are used by the researchers - not only in astrophysics but in physics as a whole, in biology, and in chemistry? If academics are not clear on their methods, or on what justifies them, then we are bound to see a wide range of quality. This is often because of the peer review process itself, see e.g.

https://newrepublic.com/article/135921/science-suffering-peer-reviews-big-problems

And there is the fact that not all journals and peer-review processes are created equal. All of this raises the question of how rigorously scientific methods are applied in any given discipline, and whether a single reliable method exists in the first place.

According to biologist Lewis Wolpert, writing in his book 'The Unnatural Nature Of Science' (1992):

"It is doubtful there is a scientific method except in very broad and general terms".

Some of the advice Wolpert distilled from practicing scientists includes: "never try to solve a problem unless you can guess the answer", "seek simplicity", "seek beauty".

All of which are fine aspirations, but let's face it, rarely achieved in doing real science - say in stochastic realms like climate science and solar flare physics.

He follows these examples by reinforcing his original claim (ibid.):

"No one method, no paradigm, will capture the process of science. There is no such thing as the scientific method".

Which is a statement very likely to shock most high school science students, who will swear their teachers have shown them exactly what the scientific method entails, viz.

- Define the problem in respect to the specific phenomenon being investigated

- Gather information, data related to the problem

- Observe the phenomenon related to the problem

- Formulate a hypothesis related to the phenomenon and problem

- From the hypothesis make predictions that can be tested

- Observe and/or experiment to test the hypothesis
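Rendered as an algorithm, this textbook scheme would look something like the following - a deliberately artificial caricature, sketched in Python, in which every function is an invented placeholder rather than anything a working scientist actually runs:

    # A caricature of the textbook "scientific method" as a loop.
    # Every function below is a hypothetical placeholder, purely pedagogical.

    def gather_data(problem):
        return [1.0, 2.1, 2.9]          # stand-in observations

    def form_hypothesis(data):
        return lambda x: x              # stand-in model

    def predict(hypothesis, data):
        return [hypothesis(d) for d in data]

    def agrees(predictions, data, tol=0.5):
        return all(abs(p - d) < tol for p, d in zip(predictions, data))

    problem = "some phenomenon"
    data = gather_data(problem)
    hypothesis = form_hypothesis(data)
    if agrees(predict(hypothesis, data), data):
        print("Hypothesis survives - for now.")
    else:
        print("Revise the hypothesis and repeat.")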

Of course, in real life there is no such set formulaic pattern. Indeed, one can point to discoveries in the history of science (e.g. black holes such as Cygnus X-1) and show that scientists are led by theory almost as often as they are led by the observations.  There are also many permutations which led Wolpert to make the claim that "there is no such thing as the scientific method."

The single-template presentation - such as that given in high school science teaching - is a myth, an artificial scheme rarely followed. It is used only for pedagogic purposes and convenience. The fact is, when one reads most scientific papers - including in physics and astrophysics - large assumptions are made and no particular method has been followed, other than perhaps to gather the data as methodically as one can, usually by painstaking observations or experiments, and to subject it to critical statistical or mathematical analysis. The latter usually entails the development of a consistent mathematical model.

However, and here is the kicker: depending on one's overarching choice of underlying philosophy, there may be no connection of the model to physical reality. Among the most influential philosophies of science right now are positivism and reductionism. In the first case, one does not expect coincidence with physical reality but rather that the model "works" mathematically. To quote Stephen Hawking from his hypothesis for quantum coherence ('The Nature of Space and Time', pp. 3-4):

"I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All one can ask is that its predictions should be in agreement with observation."

The original positivist emphasis in physics perhaps appeared first in quantum theory, with its Copenhagen Interpretation. In this interpretation the wave function ψ is purely statistical in nature, following the original proposal of Max Born. We cannot use it to make specific computations concerning individual particles of a quantum system, but are constrained to work only with collective, statistical properties.

In Born's view, the presence of i (the imaginary unit, i = √(-1)) in the quantum wave function divested it of any real, physical significance. A wave with an imaginary number is an abstraction - nothing more. In Born's statistical interpretation it was useful for describing the changing probability of finding a micro-system in a particular quantum state, but one was sensible enough not to take it literally, as a physically real entity. Thus the positivist interpretation, or CI, still holds sway for most physicists.
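To make the statistical reading concrete, here is a minimal sketch in Python, using an invented two-state system (the amplitudes are illustrative, not drawn from any real problem). The complex amplitudes themselves are never observed; only the real probabilities |ψ|² are:

    import numpy as np

    # Hypothetical two-state system: complex amplitudes for states |0> and |1>.
    psi = np.array([1 + 1j, 1 - 1j]) / 2.0    # normalized so sum |psi_i|^2 = 1

    # Born rule: observable probabilities are the squared moduli of the amplitudes.
    probs = np.abs(psi)**2
    print(probs, probs.sum())    # [0.5 0.5] 1.0 - the imaginary parts drop out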

Quantum theory accurately predicts the wavelengths of spectral lines, e.g. for hydrogen and other spectra, so whether it "really" embodies physical reality is less important than that its observational predictions can be verified.
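The hydrogen case can be checked in a few lines: the visible Balmer series follows the Rydberg formula 1/λ = R(1/2² - 1/n²). A quick evaluation (a sketch in Python, using the standard value R ≈ 1.097 x 10⁷ m⁻¹):

    # Balmer series of hydrogen: 1/lambda = R*(1/2^2 - 1/n^2)
    R = 1.097e7    # Rydberg constant, in m^-1

    for n in range(3, 7):
        wavelength_nm = 1e9 / (R * (1/4 - 1/n**2))
        print(f"n = {n}: {wavelength_nm:.1f} nm")

    # Output: 656.3 nm (H-alpha), 486.2 nm (H-beta), 434.1 nm (H-gamma),
    # 410.2 nm (H-delta) - matching the observed Balmer lines.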

Reductionism is also a recurring and mesmerizing motif in most physics approaches, though it has also now diffused to other scientific disciplines. The pinnacle of reductionism, not surprisingly, was achieved in classical physics, based on Newton’s laws of motion. Its quintessential form, Mechanics, ultimately became the model for all of science by virtue of its sophisticated and elegant mathematics. Its power, particularly to make predictions, was so beguiling that it led to an entire mechanical paradigm for the universe. 

An illustration can help to clarify why mechanical reductionism became so beguiling to later scientific minds. One of the more interesting systems is the Atwood machine, in which two masses are suspended at opposite ends of a string over a single pulley, with one mass exceeding the other, e.g. m2 > m1. When released, the masses accelerate in the direction of the greater mass, since an unbalanced force is now acting.

A typical problem for this simple pulley system is to find the resulting acceleration. Since m2 > m1 the acceleration is in the direction of m2, and drawing a free-body force diagram for each mass we may write (since T1 = T2 = T) two separate equations of motion:


m2·a = m2·g – T

m1·a = – m1·g + T

Adding the two equations:   a·(m1 + m2) = (m2 – m1)·g

Hence:   a = (m2 – m1)·g / (m1 + m2)

Thus, if we take m1 = 0.5 kg, m2 = 1.0 kg and g = 10 N/kg, then:

a = (1.0 kg - 0.5 kg)(10 N/kg) / (1.0 kg + 0.5 kg) = (5 N) / (1.5 kg) ≈ 3.33 m/s²
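The same arithmetic as a two-line check (a sketch in Python):

    # Atwood machine: a = (m2 - m1)*g / (m1 + m2)
    m1, m2, g = 0.5, 1.0, 10.0    # masses in kg, g in N/kg (equivalently m/s^2)
    a = (m2 - m1) * g / (m1 + m2)
    print(f"a = {a:.2f} m/s^2")   # prints: a = 3.33 m/s^2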

Perhaps the single comment that embodies both positivism and ardent reductionism comes via Victor Stenger's attempt to dash the validity of real quantum nonlocality (cf. 'God and the Folly of Faith', p. 155):

"It does not matter whether you are trying to measure a particle property or a wave property. You always measure particles. Here is the point that most people fail to understand: Quantum mechanics is just a statistical theory like statistical mechanics, fundamentally reducible to particle behavior."


The biggest contradiction to Stenger's interpretation comes by way of J.S. Bell (Foundations of Physics, 12, 989):

"Although ψ is a real field it does not show up immediately in the results of a 'single measurement', but only in the statistics of many such results. It is the de Broglie-Bohm variable X that shows up immediately each time."

These examples are meant to show why there is so much disagreement among physicists in accounting for differing observed phenomena. This is manifested to an extreme degree, for example, in the recent controversy over quantum "QBism".

This underscores a point made by Julian Baggini in his book 'The Edge of Reason' (Chapter Two, 'Science for Humans'):

"The quirks and deviations from the official version of pure experiment and deduction are deeply embedded in the way science works".

Something that can be easily gleaned just by referring to the link above. Or one can get hold of any number of scientific papers, say from different journals. Is there one method of approach? No, there is not. If I have one major gripe about discussions of scientific methods and approaches, it is that they ignore that scientific advance is ultimately predicated on successive approximations. If high school (or even university) students learn anything, it should be to master this subtle principle.

You have data, and accessory information, which lead to some initial result (observation, experimental outcome) that tests a particular hypothesis - call it x. You then acquire better data (perhaps because of refined instruments or techniques) and are led to a modified (improved) result such that:

x(n+1) = x(n) + P(x(n))

where x(n+1) denotes an improvement via iteration, with P the process (acting on the current estimate x(n)) that produces it, and x(0) = x the initial result.

Later, more refined data become available, such that:

x(n+2) = x(n+1) + P'(x(n+1)), and so on, and so forth.

Each of x, x(n+1), x(n+2), etc. is a successive approximation to what the objective, genuine value should be.
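The pattern is the same one that drives any iterative numerical scheme. As a loose analogue (a sketch in Python, with Newton's method standing in for the "process" P, here refining an estimate of √2):

    # Successive approximation: x(n+1) = x(n) + P(x(n))
    # Here P is the Newton correction for the root of f(x) = x^2 - 2, i.e. sqrt(2).
    def P(x):
        return -(x**2 - 2) / (2 * x)

    x = 1.0    # crude initial estimate
    for n in range(5):
        x = x + P(x)
        print(f"x({n+1}) = {x:.10f}")

    # Each iterate is a better approximation, converging on sqrt(2) = 1.4142135624...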

Beyond this, one must grasp that different approaches are dictated by the discipline or sub-discipline and its respective empirical limits. Take the cases of the solar physicist and the lab plasma physicist. The solar physicist - with few exceptions - is perforce afforded a more passive role than his lab plasma physicist counterpart. And while the solar physicist can apply plasma physics principles to his object of inquiry, say solar flare plasma of differing polarity deduced from vector magnetograms, he cannot extract that flare plasma and subject it to experimental or diagnostic tests - say, to check the Debye length of the plasma. In effect, solar physics differs from lab plasma physics in the respective degrees of control which can be exerted on the lab and solar plasma. This must be understood from the outset in order to fully appreciate the methodology applied in each discipline.
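For reference, the Debye length is λ_D = √(ε₀·k_B·T_e / (n_e·e²)), where T_e is the electron temperature and n_e the electron number density - quantities a lab physicist measures directly but a solar physicist can only infer. A quick evaluation (a sketch in Python, with illustrative, assumed flare-like values for T_e and n_e):

    import math

    # Debye length: lambda_D = sqrt(eps0 * kB * Te / (ne * e^2))
    eps0 = 8.854e-12    # vacuum permittivity, F/m
    kB   = 1.381e-23    # Boltzmann constant, J/K
    e    = 1.602e-19    # elementary charge, C

    Te = 1.0e7     # electron temperature, K (assumed, flare-like)
    ne = 1.0e16    # electron number density, m^-3 (assumed)

    lambda_D = math.sqrt(eps0 * kB * Te / (ne * e**2))
    print(f"Debye length ~ {lambda_D:.1e} m")    # ~ 2.2e-3 m, i.e. millimeter scale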

Even so, huge assumptions must often be made before the investigator in either discipline can proceed. That also often means skating over "outliers" in the data, and ignoring minor discrepancies, in order to arrive at a larger, overarching picture more amenable to publication. We will get into some of these in the next part.

