A few days ago I wrote a quick summary of a project that we just completed (and you may find it helpful to read that post first). In this project, we looked for new particles at the Large Hadron Collider (LHC) in a novel way, in two senses. Today I'm going to explain what we did, why we did it, and what was unconventional about our search strategy. The first half of this post will be appropriate for any reader who has been following particle physics as a spectator sport, or in some similar vein. In the second half, I'll add some comments for my expert colleagues that may be useful in understanding and appreciating some of our results.

Why, as theorists, would we attempt to take on the role of our experimental colleagues - to try on our own to analyze the extremely complex and challenging data from the LHC? We're by no means experts in data analysis, and we were very slow at it. And on top of that, we only had access to 1% of the data that CMS has collected. Isn't it obvious that there is no chance whatsoever of finding something new with just 1% of the data, since the experimenters have had years to look through much larger data sets?

In an immense data set, unexpected phenomena can hide, and if you ask the wrong questions, they may not reveal themselves. Conventional thinking, before the LHC and still today, is that new phenomena will likely appear in the form of particles with large rest masses. This has happened before, and it will happen again.