I failed my PhD first time around, with the external examiner recommending I not even go to the viva stage, saying that the results were obvious and trivial, not worthy of a PhD. He recommended I be given a Masters and not even permitted to re-write or re-submit.
My internal examiner insisted we go to viva, wherein I, unaware of all the above, managed to convince them that the results were surprising, interesting, novel, and worth a PhD.
The problem was that in the first version I explained far too clearly all my thinking, and how I came to the results. As a result, it all seemed obvious and inevitable.
To publish, results need to be novel, surprising, or otherwise "worthy", and if you explain too clearly how you got there, your results can easily be dismissed as "trivial".
Clearly, I despise academics (and the papers they write) who think everything should be made to look novel by hiding the thought process behind it.
Sadly, especially in fields where conferences are favored over journals, this seems to be common: you only have a single chance to impress the reviewer, so you lose little by hiding your thought process and, if possible, making your result look more complicated than it actually is.
Many papers I read look needlessly complicated, and I cannot help but wonder why the authors chose this style. I've been rejected before because reviewers found my contributions too simple; still, I'd say the majority of reviewers prefer to highlight the fact that a paper is "easy to understand". (Of course, sometimes you also need to be told that your contributions are minimal.)
Sometimes I wonder if I have this point of view because some academics like to hide their simple result under a mountain of useless rigor, or because I'm not smart enough to comprehend the necessity of said rigor and I'm just a dumb engineer who chanced into research. I'd say both are true, but usually not at the same time: some papers are clearly hiding a small contribution, while some are just above me.
Physics has the same strategy: important and timely results appear in the four-page (now four-page-ish) PRL, and the long-form paper wrapping it all up appears in PRD (or PRA, PRB, PRC, or PRE).
The only drawback is that people primarily read PRL, so authors try to squash long-form results into PRL alone.
That sounds like a profoundly immature external examiner. Any seasoned person here is aware of this effect.
In my view the "logical sciences" all aim for triviality: the hardest thing is to specify the problem in such a way that the solution is trivial. All the work is done, precisely, so that it is trivial.
I have a similar experience in industry (software development) that I relive weekly, but I never learn.
When I help someone solve a problem, I like to show them how I got to the solution. In the end, the hour I invested becomes "we did it together" and isn't appreciated by anyone, least of all the manager. But on the few occasions when I just bang out a function quickly for somebody because I don't have the time, I'm hailed as a lifesaver critical to the business and my name is mentioned in the Daily Standup the next day.
Explaining how we work trivializes our work. It's better to remain mysterious.
Mr. Jabez Wilson laughed heavily. “Well, I never!” said he. “I thought at first that you had done something clever, but I see that there was nothing in it after all.”
“I begin to think, Watson,” said Holmes, “that I make a mistake in explaining. ‘Omne ignotum pro magnifico,’ you know, and my poor little reputation, such as it is, will suffer shipwreck if I am so candid.”
- "The Red-Headed League" (The Adventures of Sherlock Holmes)
Reminds me of a favorite compliment about some of my code: a friend told me that, after reading it, he didn't realize the job it did was tricky until he later implemented the same thing himself.
I had a manager take my 500 lines and turn it into 1500 lines. He said it looked too simple and needed to be more complex to be accepted by the other teams.
Sometimes the result of thousands of hours of work and thinking ends up being summed up in a few sentences. Newton's three laws are a perfect example.
Carmack recently said that all the key ideas needed to create AGI will probably fit in a couple of pages, with a basic implementation being only a few thousand lines of code. I'm very much inclined to believe him.
Once you know how the trick is done, of course it'll seem obvious. But if it was so obvious why didn't you come up with it yourself?
Some people feel the need to write incredibly long texts to explain what they're doing, because it makes the topic seem more complicated.
Common in industry as well. I work at a company at the top of its field. A senior engineer once told me the advice he got while rehearsing a presentation:
"Your slides explain things so well it makes it easy to understand your work. No one will be impressed. The more people have to think to understand your slides, the more impressed they will be. It's OK if most people don't understand much of your presentation. "
Specific advice included: Pack your slides with content (charts, etc). The audience should feel overwhelmed.
From what I've experienced, the advice is sadly spot on.
Generally true, but there are exceptions. For example ...
There was an occasion when we were making a presentation in the early stages of bidding for a contract (complicated industry, don't ask unless you want to know). The business guy had completed his presentation about contracts we'd delivered and systems we could provide, and then it was my turn.
I turned off the projector, retracted the screen, drew a few blobs on the whiteboard, labelled them "Remote, remote, Command and Control", then turned to the assembled personnel and said: "What would you like to know?"
My intent was to explain clearly what we did in a way they could understand, because I knew that if they understood our capability, they'd buy our kit.
They did, and they did.
So for the right audience, explaining clearly is the right thing to do.
Oh now I'm actually curious, I have a weak spot for explanations that make surprising results seem inevitable.
Much better than articles that make stuff sound fancy to make it more convincing (e.g. I've seen articles that used $r^n \pi^{n/2} / \Gamma(n/2 + 1)$ instead of the more sensible 'volume of an n-sphere').
There's a balance to achieve between presentation of a NEW RESULT vs simple obfuscation. Achieving that balance depends on understanding your audience and adapting your style appropriately.
The "understanding your audience" bit is repeated again and again and again, but seems never to be heard or heeded.
To be fair, most PhDs in Pure Maths don't contain earth-shattering, field-altering results. Mostly they are small things that are genuinely new, but not very substantial. The author then has to make it clear where they fall in the tension between "trivial observation" and "genuine progress". I didn't do that the first time.
Haha. This reminds me of how I failed an exam on electronic circuits in university. The teacher gave us some brief formulas without going over why they made sense. When it came to solving the problems in the exam, I used the unpacked formula step by step (that's how I had studied it to prepare) and came to a good result. He failed me and told everyone that one student (me) had detailed the solution in too many steps, and that you don't do that. Funny thing is that the colleague behind me, who copied what he could see from my paper, passed the exam. He didn't copy all the details :))
Reminds me of a strange effect of optimizing a graph traversal. Early demos, before the performance work, took a long time and produced lots of logs, so people thought it was hard. With the improvements it was instantaneous, and people thought my code did nothing.
Alas, I have reached exactly the same conclusion over the years.
You need to strike the right balance between obfuscating so much that the reader ragequits in frustration, and obfuscating just enough that the reader has to put in a bit of "work" before the insight hits home, lest they fail to appreciate how much effort getting to that insight took in the first place and dismiss it as trivial.
Do you feel like papers should have separate derivations and proofs of correctness? The latter prioritizing correctness (intuition being secondary), and the former prioritizing intuition (rigor being secondary).
I never implied that the obfuscation comes from having to present the maths. Clarity and rigour are orthogonal concepts. In fact, the reverse is often the case: make the derivations comprehensive and easy enough to follow, and the reader will feel it was an obvious result.
The argument here is that occasionally you should jump from A to C, skipping B (in the paper, not in your own work!), such that the reader will have to work that extra step to reach C from A, to make them appreciate more that the insight "A leads to C" was not a trivial one (and thus "not novel").
This human tendency towards obfuscation is encouraged by the need to survive in a capitalist market. If others don't see value in what you do, why should they share the fruits of their own labor? It has resulted in many Rube Goldberg constructs in our world: the health care system, the legal system, the military-industrial complex, office bureaucracy, etc. Silicon Valley has its own version and even labeled it: technical debt.
I can't wait for AGI to show us how meaningless we all are. Perhaps when a machine can outvalue all human endeavor, we will be forced to recognize a greater truth and invent a new mode for valuing our existence. Something that values a sense of goodwill and a willingness to help our neighbor. Because their plight might be our own.
The problem was not that you explained your reasoning but that you did so inappropriately for your intended readers. This is explained clearly in Lawrence McEnerney's Leadership Lab courses: https://www.youtube.com/results?search_query=Lawrence+McEner...
Pure Maths, specifically Combinatorics and Graph Theory.
The first part was a generalisation of the Chromatic Function on graphs to provide information about the "Shapes" of colourings, and not just their number.
The second part showed that a plausible (hand-wave hand-wave) location for a counter-example to an open conjecture about partially-ordered sets did not, in fact, hold a counter-example. In other words, I proved a special case of an open conjecture on posets.
Is the first part similar to the "W function" from Tutte's "A ring in graph theory" (1947)? If I remember correctly, it generalizes the chromatic/flow/Tutte polynomial to yield a linear combination of disjoint unions of bouquets (i.e., a linear combination of ordered lists of natural numbers, describing the dimensions of the cycle spaces of each connected component in the state sum expansion of the chromatic/flow/Tutte polynomial).
More abstract than that. It generated an element from an infinitely generated ring. If there was a colouring with colour sets of size x, y, and z, then it included the element $P_{x-1} P_{y-1} P_{z-1}$ from the ring. Then it summed over all colourings.
So if there is a colouring of $G$ using $n$ colours with induced sets of size 4, 4, 2, and 1, then the function evaluated at $n$ included $P_3^2 P_1 P_0$.
(The "-1" is a technical thing that only feels obvious once you've played with it enough).
You end up with what the values should be, but the clever part was using "umbral evaluation" on the polynomial, and not just simple substitution. It might be possible to find the details on the web, but this was 35 years ago.
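If I'm reading that description right, a naive version of the invariant (before the umbral-evaluation step, and with the name $W_G$ being my own placeholder) would look something like

$$W_G(n) \;=\; \sum_{\kappa}\;\prod_{\substack{1 \le i \le n \\ \kappa^{-1}(i) \neq \emptyset}} P_{\,|\kappa^{-1}(i)|-1},$$

where the sum runs over the colourings $\kappa : V(G) \to \{1, \dots, n\}$ being counted and $P_0, P_1, P_2, \dots$ are independent generators of the ring; the size-(4, 4, 2, 1) colouring above would then contribute exactly $P_3^2 P_1 P_0$. This is only my guess at the construction from the description here, so the details may well be off.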
This reminds me somewhat of Stanley's symmetric chromatic function, which I guess came about ten years later (1995). Actually, what you're describing seems to be something I rediscovered a couple years ago, where P_k is interpreted as the kth power sum symmetric polynomial in n variables.
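For reference (from memory, so treat this as a sketch rather than a quote of the paper): Stanley's chromatic symmetric function is $X_G = \sum_{\kappa} \prod_{v \in V(G)} x_{\kappa(v)}$, summed over proper colourings $\kappa$, and it has the power-sum expansion

$$X_G \;=\; \sum_{S \subseteq E(G)} (-1)^{|S|} \, p_{\lambda(S)},$$

where $\lambda(S)$ is the partition recording the sizes of the connected components of the spanning subgraph $(V(G), S)$ and $p_\lambda = \prod_i p_{\lambda_i}$ is a product of power sums. That expansion is what makes me suspect the two constructions are related.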
I'd be interested in learning more about your invariant. Do you have a copy of your thesis anywhere?
I might have a soft copy of the thesis, but I can only guarantee hard copy. If you're interested then the best thing to do is email me so it goes into my work queue, then I can have a quick hunt and see what I can produce easily.
I totally believe it should be one of the available ways to discover maths: by finding out, or asking oneself, why we studied a problem and how we arrived at a solution, either historically or by filling in the blanks.
It just so happens that minutes ago I was sharing the start of my very modest personal attempt at building a math "tech tree" out of that idea (someone suggested the name to me).
Starting with Euclidean geometry.
Here is the link: http://mathuvue.pythonanywhere.com/ (I apologize in advance; I am not a native English speaker, and it's only the kernel of what I really want to do.)
Everything around mathematics is a lonely endeavor. I would love to get in touch with anyone who might be interested in any way.
I want to write a book on the history of science, where we actually go through what people thought (luminiferous ether, phlogiston, the humours) until some experiments convinced them otherwise (Michelson-Morley, etc.). Who knew what (Arabs and algebra), and how did it spread (Jesuits)? I think it is really needed, but I haven't found any approach like this! It is all about the shrink-wrapped final results and modern beliefs. And often the modern beliefs are being updated too (such as the theory of atoms, the Big Bang, plate tectonics, etc.)
Where can I get this info? Anyone know of anything close to this?
Finally, a Quanta article in which the math is easy to understand.
The binomial series is just algebra. You take something like (a+b)^3 and then you generalize the rule for exponents to any value and, presto, done. My question is: when did math become so much harder? I think the quintic formula was when math finally became what it is today. That required a whole new conception of math. How do you go from a simple formula a high school student can grasp, which made Newton a genius because he co-discovered it, to 60+ pages of dense notation at PhD level?
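For anyone wondering what "generalize the rule for exponents to any value" means concretely, the series Newton found (stated here from memory, not from the article) is

$$(1+x)^r \;=\; \sum_{k=0}^{\infty} \binom{r}{k} x^k, \qquad \binom{r}{k} = \frac{r(r-1)\cdots(r-k+1)}{k!},$$

which is valid for any real exponent $r$ when $|x| < 1$; for a positive integer $r$ the series terminates and, after scaling, reduces to the familiar finite expansion of $(a+b)^r$.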
This was posted at least five times, without much action, until the full title was used. It turns out that the Quanta editor was good at picking titles.
Well this flies in the face of everyone who thinks the biggest stumbling block to learning mathematics is the “obscure notation”. There’s nothing obscure in this discovery, but there is genuine invention involved. Something more than can be taught by rote, or using different notation.
There are also several somewhat more worked out versions of the article and one of them shows how he arrived at an infinite series for sine. The cleverness here was just a small part of a lot more.
If I recall correctly, HN automatically removes words like 'How' from titles (I think it also removes 'Why'; not sure if it removes all interrogative words).