In computer engineering we have positive and negative zero.
Also in Math.
Unbeknownst to the GP, that’s exactly where CE got it from.
What is gp/ce?
Grand parent / computer engineering
What algebra uses negative 0?
When talking about limits, you can approach 0 from the positive or negative direction, which can give very different results. For example, lim(x→0+) cot x = ∞ while lim(x→0−) cot x = −∞.
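For a quick numerical sanity check, here's a tiny Python sketch that just samples cot very close to 0 from each side (nothing IEEE-specific yet):

    import math

    # cot(x) = cos(x) / sin(x); sample it just to the right and left of 0
    for x in (1e-8, -1e-8):
        print(x, math.cos(x) / math.sin(x))
    # ~ +1e8 for x -> 0+, ~ -1e8 for x -> 0-, matching the two one-sided limits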
Speaking as a mathematician, it’s not really accurate to call that -0.
Yes, but it is infinitesimally close.
By that logic you also can’t call something infinity. People call stuff names; it’s just important that they define their terms well enough.
Why do you think that?
IEEE 754
I mean it’s an algebra, isn’t it? And it definitely was mathematicians who came up with the thing. In the same way that artists didn’t come up with the CGI colour palette.
I’m not familiar with IEEE 754.
Edit: I think this sort of space shouldn’t be the kind where people get downvoted for admitting ignorance honestly, but maybe that’s just me.
It’s a wonderful world where 1 / 0 is ∞ and 1 / -0 is -∞, making a lot of high school teachers very very mad. OTOH it’s also a very strange world where x = y does not imply 1 / x = 1 / y. But it is, very emphatically, an algebra.
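If you want to watch it happen, here's a small numpy sketch (numpy follows IEEE 754 here; plain Python raises ZeroDivisionError instead of returning ∞):

    import numpy as np

    with np.errstate(divide="ignore"):
        print(np.float64(1.0) / np.float64(0.0))   # inf
        print(np.float64(1.0) / np.float64(-0.0))  # -inf

    # +0.0 and -0.0 compare equal, yet their reciprocals do not:
    x, y = np.float64(0.0), np.float64(-0.0)
    print(x == y)                                  # True
    with np.errstate(divide="ignore"):
        print(1.0 / x == 1.0 / y)                  # False (inf vs -inf)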
Mostly it’s pure numerology, at least from the POV of most of the people using it.
I’ll need to look at it more; it sounds interesting.
IEEE 754 is the standard to which basically all computer systems implement floating point numbers. It specifically distinguishes between +0 and -0, among other weird quirks.
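A minimal demonstration in Python (which just exposes the platform's IEEE 754 doubles): the two zeros compare equal, but the stored sign is still observable.

    import math

    pos, neg = 0.0, -0.0
    print(pos == neg)               # True: they compare equal...
    print(math.copysign(1.0, pos))  #  1.0
    print(math.copysign(1.0, neg))  # -1.0: ...but the sign is really there
    print(neg)                      # prints -0.0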
You probably are familiar with the thing, just not under that name, and not as a subject of mathematical study. I am aware that there are, at least in theory, mathematicians who never expand beyond pen + paper (and that’s fine), but TBH they’re getting kinda rare. The last time you fired up Julia you probably used them; in R, possibly; in Coq, it’d actually be a surprise.
They’re most widely known to trip up newbie programmers, causing excessive bug hunts and then a proud bug report stating “0.1 + 0.2 /= 0.3, that’s wrong”, to which the reply will be “nope, that’s exactly as the spec says”. The solution, to people who aren’t numerologists, is to sprinkle gratuitous amounts of epsilons everywhere.
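The canonical demo, plus the sanctioned flavour of epsilon-sprinkling, in Python:

    import math

    print(0.1 + 0.2)                     # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)              # False, exactly as the spec says
    print(math.isclose(0.1 + 0.2, 0.3))  # True (default rel_tol is 1e-09)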
Math is more than just the set of all algebras.
I’m aware. Algebra is what I’m most interested in, and so when someone says “0” I think “additive identity of a ring” unless context makes the use obvious.
Edit: I’ve given it some thought, and I’m not convinced all algebras can fit in a set, because every non-empty set can have at least one algebra imposed upon it, so the collection of all algebras must be at least as large as the proper class of all sets. We also can’t have a set of all algebras (up to isomorphism) because IIRC the surreal numbers are an algebra imposed on a structure that itself incorporates a proper class, and so is incapable of being a set element.
Depends, I’d say. Is your set theory incomplete or inconsistent?
And, as a mathematician who has been coding a library to create scaled geometric graphics for his paper, I hate -0.0.
Seriously, I run every number where sign determines action through a function I call “fix_zero”, just because tiny tiny rounding errors pile up in floats, even in numpy.
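(The actual body of fix_zero isn't reproduced here; a minimal sketch of what such a clamp might look like, with an arbitrary illustrative tolerance, would be:)

    import numpy as np

    def fix_zero(x, eps=1e-12):
        # Clamp anything within eps of zero to exactly +0.0 so that later
        # sign checks don't flip on accumulated rounding error.
        # eps is an arbitrary illustration, not a universal constant.
        x = np.asarray(x, dtype=float)
        return np.where(np.abs(x) < eps, 0.0, x)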
What do you mean? In two’s complement, there is only one zero.
IEEE 754 floating point numbers have a sign bit at the front, causing +0 and -0 to exist.
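You can see that single bit directly by dumping the raw 64-bit pattern of a double in Python:

    import struct

    def bits(x: float) -> str:
        # reinterpret the IEEE 754 double as a 64-bit integer, big-endian
        return format(struct.unpack(">Q", struct.pack(">d", x))[0], "064b")

    print(bits(0.0))   # 0000...0 (sign bit 0, all other bits 0)
    print(bits(-0.0))  # 1000...0 (sign bit 1, everything else identical)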
Specifically I was referring to standard float representation which permits signed zeros. However, other comments provide some interesting examples also.
https://en.m.wikipedia.org/wiki/Ones'_complement
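Relevant here because ones' complement is the other place a negative zero shows up: negation just flips every bit, so the all-ones word is a second representation of zero. A toy 8-bit sketch in Python:

    WIDTH = 8

    def ones_complement_negate(x: int) -> int:
        # flip every bit within the word width
        return x ^ ((1 << WIDTH) - 1)

    print(format(0, "08b"))                          # 00000000 -> +0
    print(format(ones_complement_negate(0), "08b"))  # 11111111 -> -0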
Who uses ones’ complement?
I assume no one at this point
I think 1’s complement only existed to facilitate 2’s complement. Otherwise it’s stupid.
floats
1 − 0.99999…
Floating point numbers are not possible in two’s complement; besides that, what is your point? 0.99999999… is probably the same as 1.
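(One way to see it, via the geometric series: 0.999… = Σ_{n≥1} 9/10^n = (9/10) / (1 − 1/10) = 1.)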
Yes, mathematically it’s the same, but in physics there’s a guy named Heisenberg who denies that 0.99999… really gets to 1. There is always this difference: for a mathematician the infinite is not a problem, but for a physicist it is, and a very big one.
True, it sounds like that might be a problem if we consider that physics has to sit somewhere between math and computer science.
(Have a nice day)