Can one explain schemes to biologists? (2014) (brown.edu)
29 points by octatoan on June 6, 2015 | 28 comments


There seems to be a fair amount of reservation or even animosity towards maths in many areas:

* among the general public, people feel it is acceptable to admit a lack of basic skill or understanding in mathematics (at times even arithmetic), whereas they would be ashamed to admit it with respect to reading or writing,

* popular science books are often written without a single formula (and according to the preface of one popular science book, editors and publishers generally reject a book that tries to sneak some in, their apparent justification being that each formula halves the sales),

* and it turns out that even STEM scientists, in this case biologists, appear to have gaps in their understanding of basic mathematics like analysis and algebra.

I'm very curious as to what might be behind this isolation of mathematics. Bad teachers? Mathematicians' own disregard for applications? Some sort of self-perpetuating myth about the inaccessibility of maths to the common person?


As a mathematician, as someone who teaches K-14 math to math-phobic adults in a semester, and as a staff member at a Sudbury school, my biased, anecdotal answer is that mathematical education is designed to tell most students that they can't do mathematics. They take that to heart and wear this mark of shame, at once proud of it and deeply troubled by it. It is quite a tragedy.

When I teach courses that I have no control over, I see over and over again tricky questions that have no relevance to actual mathematical understanding or use. At the same time, the majesty, wonder, and beauty of mathematics are expressly avoided as being "too much for students to handle".

In essence, we treat students as if they are computers learning rules to apply. But mathematics is about artistry and creativity. Being cold and harsh, being factory-like, deprives one of the part of the brain necessary to do real mathematics.

Adults routinely tell me "I did well in mathematics until...and then I stopped understanding. I stopped enjoying, I struggled with my grades, and I can't do math."

Bottom line, people get labeled as bad at mathematics and they accept it. And it is only getting worse with Common Core tests.

We might even see future generations being bad at reading, etc. Already I hear first graders saying "I'm not a reader because I do poorly on the reading tests."

It is sad because mathematics is one of the great gifts to humanity. It is breathtaking when used and done properly.


It is also shocking how mathematicians have huge gaps in their basic knowledge of cell biology. What can be done to help mathematicians learn relevant applications? Or even accept that applications merit more than the occasional momentary thought, given only to wave off a field as trivial?


Mathematics and biology are not interchangeable like that. One is specific knowledge; the other (at least the parts we're talking about here) is a set of mental tools and models, which are useful in every field.


That would only be concerning if those mathematicians were working in cell biology or closely related fields.

Innumeracy is a problem very similar to illiteracy: it has a broad impact on your ability to learn and reason about the world, but it is much more socially accepted and (hence?) much more prevalent.


Lior Pachter had a blog post about this incident (The two cultures of mathematics and biology): https://liorpachter.wordpress.com/2014/12/30/the-two-culture...

It was previously discussed here: https://news.ycombinator.com/item?id=8819811


Mathematics has a massive problem with communication. There are communities among mathematicians which exclude each other, and these communities are invisible to the outsider, so the field appears to be full of contradictions. For example, ZFC is fine for calculus (most of engineering), but completely inadequate for computer science. For CS you have to allow self-reference and that is expressly verboten in ZFC. By extension, CS only needs the natural numbers, forgoing Hilbert space for Banach space + process.

Hilbert said "Wir müssen wissen — wir werden wissen!" ("We must know; we will know!"), and just the day prior Gödel had demonstrated the futility of axiomatic set theory. Yet, still today, ZFC set theory is assumed unless it is stated otherwise.

There are Luddites among mathematicians, and Turing died for their sins.


> For example, ZFC is fine for calculus (most of engineering), but completely inadequate for computer science. For CS you have to allow self-reference and that is expressly verboten in ZFC.

I assume you are referring to the axiom of foundation, which implies, for example, that you cannot find three sets A, B and C such that A ∈ B, B ∈ C, C ∈ A. This is not an obstacle to talking about self-reference, recursion, cyclic graphs, etc. The axiom of foundation keeps the membership relation from having cycles, but it doesn't keep you from talking about other relations that might have cycles! Indeed, defining functions recursively is completely standard practice in mathematics, and is formalizable in ZFC with no problem. Also, the most common definition of a graph is explicitly in terms of sets (a graph is an ordered pair (V,E) where E is a set whose elements are pairs of elements of V) and easily formalizable in ZFC. That you can't have ∈-cycles never comes up in, say, the study of cycles in graphs or of recursively defined functions.
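To make that concrete, here is a quick illustrative sketch in Python (the names and values are mine, just for illustration): a recursively defined function, once formalized, is nothing more than its graph, a plain set of (input, output) pairs, with no membership cycle anywhere.

    # A recursively defined function.
    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)

    # The object ZFC actually talks about is the function's graph,
    # a plain set of ordered pairs:
    factorial_graph = {(n, factorial(n)) for n in range(6)}
    # contains (0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)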


Are you saying that some but not all cycles are kosher with ZFC?


The axiom of foundation forbids cycles in the membership relation; you are of course allowed to talk about other relations that do have cycles. For example, R={(1,2),(2,3),(3,1)} is a relation defined in ZFC that relates 1 to 2, 2 to 3, and 3 to 1. If you want to study cycles in directed graphs, it never matters that you can't have membership cycles, and I'm confused about why anyone would think it does.
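Purely as an illustration (a sketch of my own, in Python), here is that relation with its cycle walked explicitly:

    # The relation R as a plain set of ordered pairs.
    R = {(1, 2), (2, 3), (3, 1)}

    succ = dict(R)            # successor map: {1: 2, 2: 3, 3: 1}
    x, path = 1, [1]
    for _ in range(len(R)):   # follow the relation three steps
        x = succ[x]
        path.append(x)
    print(path)               # [1, 2, 3, 1] -- back where we started

No set here is a member of itself; the cycle lives in R, not in ∈.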


Thanks for the clarification. This is what set me off: https://en.wikipedia.org/wiki/Finite-state_machine, along with discussing the topic in ##math on Freenode.


I believe they are pointing out that the constraints on the construction of ZFC do not meaningfully/practically constrain the objects built atop the system that you would want to study and manipulate in CS.

I can't see how your implication works either; can you make it explicit?


Those tricky relationships arise from the most fundamental circuits that define CS, yes.

What I am trying to say is that you do not need mathematical formulas to explain mathematical concepts, but certain vocal academics have managed to push a view that math is nothing but formulas AND that there is no controversy here. There is, in fact, a lot of controversy, and this does lead back to Grothendieck, since his work does not support the formulas-only view.


> but certain vocal academics have managed to push a view that math is nothing but formulas AND that there is no controversy here.

This isn't really a thing, which may explain why I'm having trouble understanding what you are trying to say. To be more concrete: from what I can understand of your somewhat incoherent characterization, it does not match any mathematical community I know of. It may exist, but if so it is not mainstream.


In the context of set theory, by "self-reference" I would assume you mean either 1) impredicative definitions, i.e., definitions where the bound variables used in the definition are permitted to range over the whole universe to which the very thing being defined belongs, or 2) non-well-founded sets, sets that can have themselves as members or be defined in terms of infinitely descending membership chains like ... ∈ n4 ∈ n3 ∈ n2 ∈ n1.

ZFC admits the first by not limiting the range of the bound variables of the axiom schemas of Separation or Replacement (unlike the Separation axiom schema of Bounded Zermelo Set Theory), but it disallows the second (due, for example, to Foundation).
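For reference, Foundation says that every nonempty set has a member disjoint from it:

    \forall x \, \big( x \neq \varnothing \rightarrow \exists y \, (y \in x \wedge y \cap x = \varnothing) \big)

A set x with x ∈ x would violate this: its singleton {x} would have no member disjoint from itself.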

So at least one form of self-reference, impredicative definitions, is permitted by ZFC.
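For concreteness, a standard example of an impredicative definition: the supremum of a bounded set A ⊆ ℝ is the least of its upper bounds,

    \sup A = \min \{\, u \in \mathbb{R} : \forall a \in A \; (a \le u) \,\}

where u ranges over all the reals, a collection to which sup A itself belongs.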


By self-reference I mean recursion.

Edit: Thanks for the reference. It appears to be a much more accurate account of the rift which I tried to talk about. I had previously been informed that my conclusion, that ZFC does not permit recursion, as in a cyclic graph, was correct, and this certainly is further evidence regarding this issue.


Most communities don't "exclude" each other; they've simply become sufficiently deep that it can take years to get into them. Your example doesn't even make sense. Different applied models require different math; so what?

I don't think the issue is mathematics communication. The math community is pretty obsessed with spreading knowledge, and some fields (e.g. econ) make fantastic use of math as a communications tool. The issue is rather that a lot of folks simply refuse to learn it and complain when others use it.


Enthusiasm doesn't imply skill. - I love discussing natural philosophy, and I believe I'm good at explaining these ideas, but I have no shortage of evidence that I am in fact very bad at explaining things. I am probably not special.

Either way, I can speak in a strict fashion which sort of jibes with formal logic. I referenced Hilbert's program and the assumption that mathematics has a foundation: https://en.wikipedia.org/wiki/Foundations_of_mathematics#Fou...

I make a leap of faith here: that humans cannot abandon the need to find meaning. From this I surmise that there does exist a deep rift in the mathematical community. - Both Formalism and Intuitionism remain today, and if an understanding has been reached between these schools then I must have remained oblivious to the fact...


I've got a terminal degree in math and have been working in industry doing math for 15 years now, as a consumer of mathematical research in a huge variety of fields, and I have never once needed or wanted to care about foundational issues.


I think there are definitely interesting things to talk about in foundations, and also that many are now well motivated by computer science. It's not like these are either unknown or special, though. For most "practical" pure math, arguing about foundations can't move your field forward, as it is not holding your field back.

Further, differences in communication rarely stem from foundational issues so much as from the social construct that practitioners in separate fields have less to say to one another than those in the same field. This leads to semantic drift and the development of independent metaphor technologies. Translation necessarily becomes more expensive, and so a higher ROI is required to motivate it.


I think I can express my opinion simply now. Thank you. -

"Practical" pure math appears to be based on formalism rather than intuitionism. Because foundations and therefore intuition does not hold formalism back, these practitioners drift further away from being able to communicate their metaphors to whose who rely on intuition; such as biologists.

- And this is my opinion.


Are you conflating "intuitionism" and common "intuition"? I'm not personally sure I disagree here at all, but I'm certain many would!


Strictly, no... then we would have to debate what common intuition is, and that is untenable.

What I am trying to say is that math is a discipline of philosophy, that the integers are divine, and that the straightedge and compass ought to be enough for anybody. ;) I often rely on my intuition for abstract thought, and repeatedly find that others have reached the same conclusions that I do. Therefore (my own) intuition is repeatable, and therefore intuition is scientific.

Juxtapose formalism, which seems to say that math is solely the practice of inventing rules and following them to the end. - I understand why this idea has its own beauty, haughty as it might seem, but at the same time it is a very strange and in my opinion frightening kind of beauty because it intends to remain unknowable.

Turing, whom I have apparently proclaimed a saint, seems to observe that both of these paradigms are needed for some kind of universal, and altogether anthropomorphic, program. I would be hard pressed to dismiss any idea which appears beautiful...


> Their editor wrote me that 'higher degree polynomials', 'infinitesimal vectors' and 'complex space' (even complex numbers) were things at least half their readership had never come across.

It appears to me that in software engineering the situation is a lot better. Anecdotes from my own recent experiences:

- polynomials (encountered when reading about CRC codes and elliptic curve cryptography),

- infinitesimals and vectors (encountered when recalling Newton's third law while working on a small Android game using libGDX and Box2D),

- complex numbers (encountered when learning Go and reading about AC circuits).

I haven't encountered complex space, but my guess is that it's similar to the familiar real vector space but permits complex numbers as vector components and coefficients in linear combinations.
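Python, like Go, has complex numbers built in, so the AC-circuit calculation looks like this (a toy sketch; the component values are made up):

    import cmath

    # Series RLC impedance: Z = R + j*(wL - 1/(wC)).
    R, L, C = 100.0, 0.5, 1e-6          # ohms, henries, farads
    w = 2 * cmath.pi * 50               # angular frequency at 50 Hz
    Z = R + 1j * (w * L - 1 / (w * C))

    print(abs(Z), cmath.phase(Z))       # magnitude (ohms) and phase (radians)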


By the way, you would probably also quickly encounter the schemes mentioned in the article if you study elliptic curve cryptography :)


The cool thing about software, at least when coding stuff up for fun rather than as a career, is that it tends to expose you to a great number of different fields of knowledge.


Bingo. I think the biggest problem CS students have is being under the impression that they're studying for a career directly, instead of learning the tools they need to tackle unknown problems.


The version submitted in the comments is better than Mumford's IMO.



