How to read a scientific paper 4.3

This artist’s impression shows how the Milky Way galaxy would look seen almost edge-on, from a very different perspective than we get from the Earth. The central bulge shows up as a peanut-shaped glowing ball of stars, and the spiral arms and their associated dust clouds form a narrow band.

This post concludes the last step in reading a paper: reading the whole thing. I chose an article about the Discovery of Gamma-Ray Emission from the X-shaped Bulge of the Milky Way. You can find the full article for free at arxiv.org. And if you're feeling posh, you can also buy the version that was published in Nature.
Here is the ‘triage’ an average grad student uses when reading scientific papers, and I follow that ‘triage’ here. The links lead to the posts that covered the previous steps.

1. Read the abstract.
2. Look at the images and read the captions.
3. Read the conclusion/summary.
4. Read the paper in detail: part I, part II, part III.

Appendix – Templates

This part contains the details of the data preparation and processing. Templates are used to model the gas and dust so they can be removed from the dataset. The better the model, the better the removal of the main, overwhelming signal, so that the residual signal can be correctly analyzed. Remember, this paper is about a signal hidden inside the main signal. Like trying to see your heartbeat in the movement of your skin. (This is actually possible, by analyzing video of a person.)
In this part of the appendix, the authors explain in detail which models and templates they used to approximate the bulk of the signal. They write down all the assumptions they made in their modeling and show more images that demonstrate the effectiveness of the particular model and templates they used. Such information will help anyone who wishes to repeat their research. The explanations are detailed and easy to follow; even you would now be able to reconstruct what they did and how.
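The core idea of template fitting and subtraction can be sketched in a few lines. This is my own toy illustration, not the authors' pipeline: the templates, coefficients, and noise levels are all made up, and the real analysis fits many templates to photon counts with a proper likelihood rather than plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical foreground "templates" (think gas and dust maps), flattened to 1-D.
n_pix = 1000
gas = rng.random(n_pix)
dust = rng.random(n_pix)

# A weak signal buried under the bright foregrounds.
hidden = 0.05 * np.sin(np.linspace(0, 20, n_pix)) ** 2

# Simulated observation: strong foregrounds + weak signal + noise.
sky = 3.0 * gas + 1.5 * dust + hidden + 0.01 * rng.standard_normal(n_pix)

# Fit the template coefficients by least squares, then subtract the model.
A = np.column_stack([gas, dust])
coeffs, *_ = np.linalg.lstsq(A, sky, rcond=None)
residual = sky - A @ coeffs

print(coeffs)  # comes out close to the true [3.0, 1.5]
# The residual is now dominated by the weak hidden signal, not the foregrounds.
```

The point of the sketch: once the bright components are modeled and subtracted well, what is left over is small enough that the buried signal becomes visible.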
They also explain why an interpolation approach is not good enough for them. Honestly, I prefer to avoid interpolation myself if I have any other option. Interpolation is more or less just inventing a value similar to the values that surround the spot you wish to fill in. In essence, an application of pure statistics.
Don't get me wrong, this method is fine if you have no knowledge of the equations that govern the values you're trying to interpolate. However, in physics, scientists prefer tried and tested (if complicated) equations that actually describe the behavior. Why interpolate a gravity value when we know exactly how to get that value from the equation?
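A toy comparison (my own example, not from the paper): interpolating the strength of gravity between a few "measured" radii versus computing it directly from Newton's law. The constants are standard; the radii are invented for illustration.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg

def g(r):
    """Gravitational acceleration at distance r, from Newton's law."""
    return G * M / r**2

# Sparse "measurements" of g at a few radii (in metres).
radii = np.array([6.4e6, 7.0e6, 8.0e6])
values = g(radii)

# Interpolation: invent a value from the neighbours...
r_query = 7.5e6
g_interp = np.interp(r_query, radii, values)

# ...versus the exact value from the governing equation.
g_exact = g(r_query)

print(g_interp, g_exact)  # the interpolated value is close, but not exact
```

Here the two numbers differ by about a percent; with sparser sampling, or a function that varies more wildly, the interpolated value drifts much further from the truth.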
In science today, the equations used in modeling are really complicated. They rely on math that is not even taught to the majority of people; only mathematicians, physicists, and some engineers ever bump into it. So the authors used what they call a hydrodynamic approach. (Now you know where to start your search for the correct math. Good luck.) And from the accompanying Figure 3, you can see that this approach provides better resolution: there are more details visible in the panels obtained with the hydrodynamic approach.
They applied a similar methodology for modeling dust. And again, there is more detail in the hydrodynamic approach, though you'll notice not by much.
My guess is they had to go with the hydrodynamic approach because the signal they are trying to extract is buried so deep in stronger signals that every little bit helps.

But those were not the only two types of templates they tried. They also mention an inverse Compton emission template and a Loop I template.
What I found cute is that they also had to model how the Sun and the Moon change the observed emission. As someone who mostly studied the Sun, I found the reversal grin-inducing. During my own work, I had to carefully remove the effect of cosmic radiation from some of my datasets, mostly because I too was detecting and analyzing a weak signal and small-scale dynamics, so cosmic radiation, i.e. noise, was a pain in the behind for me.

Appendix – New Templates

Now we're getting into a detailed discussion of the novel approach used in this research. The authors explain how they made the templates for the X-shaped and nuclear bulges. The details are quite precise and accompanied by quite interesting images, Figs. 6 and 7.
Then we get to the analysis methods behind the main result of the paper. The authors explain in detail what they used to search for dark matter, and what happened when they used what they call a canonical value of a variable.
Each field has such canonical values. A long time ago, a value was established as correct and has simply been used ever since. The explanation differs for each value, but mostly there is a good mathematical reason behind it, confirmed by a significant number of experiments. You will have to dig through the references until you bump into more details.

Appendix – Alternative Explanations

Remember when I wrote that the quality of a paper is easily judged by how many alternative explanations are offered?
Not surprisingly, this paper has a whole section on alternative explanations.
The first one offered, the Fermi Bubbles, made me google to figure out what the heck those are. It seems our galaxy has two. In this part of the paper, the authors explain in detail why the Fermi Bubbles are not IT.
The second offered explanation, galactic ridge cosmic rays, seems a much better candidate for an alternative explanation (i.e. a source of confusion).

Appendix – Bin-by-Bin Analysis

Here the authors explain exactly what the bin-by-bin analysis is. The explanation produced an aha feeling in me: the method they used is quite similar to something I used in my own research. Moreover, I could see that the authors fought with the same juggling act of approaching the limits of the resolution as closely as possible.
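As I understand it, bin-by-bin means fitting the template normalizations independently in each energy bin, instead of forcing one assumed spectral shape across all bins. A toy sketch of that idea (entirely my own invention, not the authors' code; the bin counts and noise are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 5 energy bins, 200 sky pixels each. In every bin the
# counts follow one spatial template, scaled by a bin-dependent normalization.
n_bins, n_pix = 5, 200
template = rng.random((n_bins, n_pix))
true_norms = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
counts = true_norms[:, None] * template + 0.1 * rng.standard_normal((n_bins, n_pix))

# Bin-by-bin: fit the normalization independently in each energy bin,
# making no assumption about the source's spectrum across bins.
fitted = np.array([
    np.linalg.lstsq(template[b][:, None], counts[b], rcond=None)[0][0]
    for b in range(n_bins)
])
print(np.round(fitted, 2))  # recovers the per-bin normalizations
```

The fitted normalizations, read across the bins, then trace out the spectrum, rather than the spectrum being an input to the fit.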
My work targeted frequency analysis, and boy, among my peers there was a fierce discussion about the minimal number of points per wave you need to actually detect that particular wave. It is a fine line to walk.
The new results are hidden in that narrow region revealed by a new instrument's increased resolution, and of course one would like to push to the very limit in the hope of being the scientist who gets the breakthrough result. In essence, the process is a constant struggle against human nature and our tendency to jump to conclusions.
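The points-per-wave problem is easy to demonstrate. In this toy sketch (mine, not the paper's), a wave with 4 cycles is recovered fine with plenty of samples, but with too few points per wave it aliases to a wrong frequency:

```python
import numpy as np

f_true = 4  # the wave completes 4 cycles over the unit interval

def dominant_frequency(n_samples):
    """Sample the wave at n_samples points and return the strongest FFT bin."""
    t = np.linspace(0, 1, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * f_true * t)
    spectrum = np.abs(np.fft.rfft(x))
    return int(np.argmax(spectrum))  # frequency in cycles per interval

print(dominant_frequency(64))  # plenty of points per wave: recovers 4
print(dominant_frequency(6))   # only 1.5 points per wave: aliases to 2
```

With fewer than 2 samples per cycle (the Nyquist limit) the wave does not just get noisy, it masquerades as a completely different wave, which is exactly the kind of trap that invites jumping to conclusions.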
I met my first instance of such a jump-to-conclusions error during my doctoral studies. The student before me, who even published her results in a peer-reviewed journal, did something to her data that gave her completely different results than mine. She detected energy levels I never found in my datasets. My results were confirmed by scientists from other, independent institutes; no one ever found the energy she did. My guess is she made an error in the data preparation, or during the observations themselves. But what really happened we will never know, because she left science shortly after she got her Ph.D.
And honestly, among scientists no one cares; no one besides me even remembers her wrong result. I remember only because my own thesis advisor made problems for me since I could not find the same energy she did (i.e. make the same error). Ah well, as I said, scientists are just humans too.

Appendix – Comparing Hydrodynamic and Interpolated Gas Templates

Here you can read about how the authors picked their templates and what tests they performed before picking.
As I said, each time some new part of the analysis is about to be used, tests have to be performed to validate the technique. Only if the tests show that the technique works and performs satisfactorily can one apply it to new datasets and new analyses.

Appendix – Point Source Search

Here they explain how they searched for point sources and combined this search with the previous analysis techniques. The explanation is detailed, including the know-how, the assumptions, and why some things were done and others were not. The limits used in the analysis are also explained, along with why they were introduced.

Appendix – Search for Extended Emission

This part is really the most convincing. The authors explain what they did to reproduce the results of the previous studies, i.e. to simulate a ‘detection’ of dark matter.
But of course, when they applied their new methods and techniques, the dark matter signal fell below the significance level.

Appendix – Determining the X-bulge Contribution to the Resolved MSPs

And in this last part, the authors explain how they modeled the X-bulge contribution.

The rest

Below, you can find a few more figures and tables that accompany this part. Each and every one of them makes the paper and its conclusions stronger.

In the end, when reading a paper, pay attention to how detailed the explanation of the methodology behind the results is, and how well the authors present possible alternative explanations of their results. Pseudo-science papers usually have serious problems with these two points, especially the alternative-explanations part.

STAY SMART

