Discussion:
Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
186282@ud0s4.net
2024-10-13 02:54:07 UTC
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html

A team of engineers at AI inference technology company
BitEnergy AI reports a method to reduce the energy needs
of AI applications by 95%. The group has published a
paper describing their new technique on the arXiv preprint
server.

As AI applications have gone mainstream, their use has
risen dramatically, leading to a notable rise in energy
needs and costs. LLMs such as ChatGPT require a lot of
computing power, which in turn means a lot of electricity
is needed to run them.

As just one example, ChatGPT now requires roughly 564 MWh
daily, or enough to power 18,000 American homes. As the
science continues to advance and such apps become more
popular, critics have suggested that AI applications might
be using around 100 TWh annually in just a few years, on
par with Bitcoin mining operations.

In this new effort, the team at BitEnergy AI claims that
they have found a way to dramatically reduce the amount
of computing required to run AI apps that does not result
in reduced performance.

The new technique is basic—instead of using complex
floating-point multiplication (FPM), the method uses integer
addition. Apps use FPM to handle extremely large or small
numbers, allowing applications to carry out calculations
using them with extreme precision. It is also the most
energy-intensive part of AI number crunching.

. . .

The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.

But, floating-point operations use a huge amount of
CPU/NPU power.

Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....

I did one or two apps long back using a sort of "fuzzy
logic". All the books had examples showing the use of
floating-point for dealing with the 'fuzzy' values.
However I quickly figured out that 32-bit ints offered
more than enough resolution and were very quick - esp
on micro-controllers.
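For flavour, a hypothetical sketch of the sort of thing I mean - a
triangular membership function done entirely in scaled 32-bit integers
(1000 standing in for 1.0), names made up, no floats anywhere:

  /* Hypothetical sketch: triangular fuzzy membership with scaled ints.
     FUZZY_ONE (1000) stands in for 1.0; no floating point anywhere.   */
  #include <stdint.h>

  #define FUZZY_ONE 1000L

  /* degree of membership of x in the triangle lo..peak..hi, 0..FUZZY_ONE */
  static int32_t fuzzy_tri(int32_t x, int32_t lo, int32_t peak, int32_t hi)
  {
      if (x <= lo || x >= hi)
          return 0;
      if (x <= peak)
          return (int32_t)(((int64_t)(x - lo) * FUZZY_ONE) / (peak - lo));
      return (int32_t)(((int64_t)(hi - x) * FUZZY_ONE) / (hi - peak));
  }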
Richard Kettlewell
2024-10-13 09:15:54 UTC
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
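For a rough flavour of why that can work at all (this is *not* the
paper's algorithm, just the classic illustration, and it assumes
positive, finite inputs): the bit pattern of an IEEE-754 float is
roughly a scaled, biased log2 of its value, so one integer add on the
bit patterns approximates a multiply with no mantissa product at all:

  /* Not the paper's method -- a classic illustration only.
     Assumes a and b are positive, finite floats.                      */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  static float approx_mul(float a, float b)
  {
      uint32_t ia, ib, ir;
      float r;
      memcpy(&ia, &a, sizeof ia);
      memcpy(&ib, &b, sizeof ib);
      ir = ia + ib - 0x3F800000u;   /* remove one exponent bias (127 << 23) */
      memcpy(&r, &ir, sizeof r);
      return r;
  }

  int main(void)
  {
      printf("exact  %f\n", 3.7f * 2.9f);
      printf("approx %f\n", approx_mul(3.7f, 2.9f));  /* off by a few percent */
      return 0;
  }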
--
https://www.greenend.org.uk/rjk/
The Natural Philosopher
2024-10-13 12:25:26 UTC
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D-to-As feeding analog multipliers,
and convert back to digital afterwards, for a speed/precision tradeoff.
--
There is nothing a fleet of dispatchable nuclear power plants cannot do
that cannot be done worse and more expensively and with higher carbon
emissions and more adverse environmental impact by adding intermittent
renewable energy.
Pancho
2024-10-13 13:23:25 UTC
Post by The Natural Philosopher
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
   The default use of floating-point really took off when
   'neural networks' became popular in the 80s. Seemed the
   ideal way to keep track of all the various weightings
   and values.
   But, floating-point operations use a huge amount of
   CPU/NPU power.
   Seems somebody finally realized that the 'extra resolution'
   of floating-point was rarely necessary and you can just
   use large integers instead. Integer math is FAST and uses
   LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D to As feeding analog multipliers.
And convert back to D afterwards. for a speed/ precision tradeoff.
That sounds like the 1960s. I guess this idea does sound like a slide rule.
The Natural Philosopher
2024-10-14 10:16:48 UTC
Post by Pancho
Post by The Natural Philosopher
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
   The default use of floating-point really took off when
   'neural networks' became popular in the 80s. Seemed the
   ideal way to keep track of all the various weightings
   and values.
   But, floating-point operations use a huge amount of
   CPU/NPU power.
   Seems somebody finally realized that the 'extra resolution'
   of floating-point was rarely necessary and you can just
   use large integers instead. Integer math is FAST and uses
   LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D to As feeding analog
multipliers. And convert back to D afterwards. for a speed/ precision
tradeoff.
That sounds like the 1960s. I guess this idea does sound like a slide rule.
No, apparently it's a new (sic!) idea.

I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
--
There’s a mighty big difference between good, sound reasons and reasons
that sound good.

Burton Hillis (William Vaughn, American columnist)
Computer Nerd Kev
2024-10-14 21:10:32 UTC
Post by The Natural Philosopher
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
If they have a solution for the typical problem of noise in the
analogue signals drowning out the "complex" simulations. Optical
methods are interesting.
--
__ __
#_ < |\| |< _#
The Natural Philosopher
2024-10-14 22:47:11 UTC
Post by Computer Nerd Kev
Post by The Natural Philosopher
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
If they have a solution for the typical problem of noise in the
analogue signals drowning out the "complex" simulations. Optical
methods are interesting.
If they don't, then that is in itself a valuable indication: if two
runs give different results they are modelling a chaotic system.
It doesn't matter how much precision you put on junk data, its still junk.
--
"It is an established fact to 97% confidence limits that left wing
conspirators see right wing conspiracies everywhere"
186282@ud0s4.net
2024-10-15 06:31:58 UTC
Post by The Natural Philosopher
Post by Pancho
Post by The Natural Philosopher
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
   The default use of floating-point really took off when
   'neural networks' became popular in the 80s. Seemed the
   ideal way to keep track of all the various weightings
   and values.
   But, floating-point operations use a huge amount of
   CPU/NPU power.
   Seems somebody finally realized that the 'extra resolution'
   of floating-point was rarely necessary and you can just
   use large integers instead. Integer math is FAST and uses
   LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D to As feeding analog
multipliers. And convert back to D afterwards. for a speed/ precision
tradeoff.
That sounds like the 1960s. I guess this idea does sound like a slide rule.
No, apparently its a new (sic!) idea.
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
Yea, but not much PRECISION beyond a stage or two
of calx :-)

No "perfect" fixes.
The Natural Philosopher
2024-10-15 11:03:32 UTC
Post by The Natural Philosopher
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
  Yea, but not much PRECISION beyond a stage or two
  of calx  :-)
  No "perfect" fixes.
As I said, let's say we are simulating airflow over a fast moving
object - now normally the fluid dynamics CFM is crap and it is cheaper
and more accurate to throw it in a wind tunnel.

The wind tunnel is not measuring data to any high accuracy but it's
using atomic level measurement cells in enormous quantities in parallel.

The problem with CFM is you can't have too many 'cells' or you run out of
computer power. It's a step beyond 3D modelling where the more triangles
you have the closer to real everything looks, but it's a similar problem.

But a wind tunnel built out of analogue 'cells' might be quite simple in
concept. Just large in silicon scale.

And it wouldn't need to be 'programmed' as its internal logic would be
constructed to be the equations that govern fluid dynamics. All you
would then do is take a 3D surface and constrain every cell in that
computer on that surface to have zero output.
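As a crude digital stand-in for the idea (nothing to do with real CFD,
just the 'sea of identical cells with some of them clamped' notion): a
toy grid where every cell repeatedly averages its neighbours and cells
marked as the body are pinned to zero:

  /* Toy stand-in: each cell averages its four neighbours each step;
     cells marked as the body are clamped to zero.  Illustrative only. */
  #include <stdio.h>

  #define N 16

  int main(void)
  {
      double u[N][N] = { { 0 } }, v[N][N] = { { 0 } };
      int body[N][N] = { { 0 } };
      int i, j, step;

      for (j = 0; j < N; j++) u[0][j] = 1.0;   /* driven edge            */
      for (i = 6; i < 10; i++)                 /* a small square 'body'  */
          for (j = 6; j < 10; j++) body[i][j] = 1;

      for (step = 0; step < 500; step++) {
          for (i = 1; i < N - 1; i++)
              for (j = 1; j < N - 1; j++)
                  v[i][j] = body[i][j] ? 0.0
                          : 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]);
          for (i = 1; i < N - 1; i++)
              for (j = 1; j < N - 1; j++)
                  u[i][j] = v[i][j];
      }
      printf("sample interior cell: %f\n", u[3][3]);
      return 0;
  }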

If I were a graduate again that's a PhD project that would appeal...
--
There is nothing a fleet of dispatchable nuclear power plants cannot do
that cannot be done worse and more expensively and with higher carbon
emissions and more adverse environmental impact by adding intermittent
renewable energy.
186282@ud0s4.net
2024-10-16 06:54:04 UTC
Post by The Natural Philosopher
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
   Yea, but not much PRECISION beyond a stage or two
   of calx  :-)
   No "perfect" fixes.
As I said, let's say we are simulating airflow over  a fast moving
object - now normally the fluid dynamics CFM is crap and it is cheaper
and more accurate to throw it in a wind tunnel.
Very likely ... though I've never thrown anything into
a wind tunnel.

Analog still has a place. Until you go atomic it really
is a very analog universe.

In theory you can do "digitized analog" ... signal
levels that seem/act analog but are really finely
discrete digital values. This CAN minimize the
chain-calc accuracy problem.
The wind tunnel is not measuiring data to any high accuracy but its
using atomic level measurement cells in enormous quantities in parallel.
The problem with CFM is you cant have too may 'cells' or you run out of
computer power. Its a step beyond 3D modelling where the more triangles
you have the closer to real everything looks, but its a similar problem .
But a wind tunnel built out of analogue 'cells' might be quite simple in
concept. Just large in silicon scale.
And it wouldn't need to be 'programmed' as its internal logic would be
constructed to be the equations that govern fluid dynamics. All you
would then do is take a 3D surface and constrain every cell in that
computer on that surface to have zero output.
If I were a graduate again that's a PhD project that would appeal...
I've seen old analog computers - mostly aimed at finding
spring rates and such. Rs, caps, inductors ... you can
sim a somewhat complex mechanical system just by plugging
in modules. Real-time and adequately accurate. You can
fake it in digital now however ... but it's not as
beautiful/natural.
186282@ud0s4.net
2024-10-15 06:30:12 UTC
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
They need to take it further - integers instead
of ANY floating-point absolutely anywhere possible.

The greenies have begun to freak over the sheer electric
power required by "AI" systems. It IS rather a lot. It's
getting worse than even bitcoin mining now. Judging by
the article, a large percentage of that energy is going
into un-needed floating-point calx.
Mike Scott
2024-10-15 07:52:03 UTC
Post by Richard Kettlewell
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
  They need to take it further - integers instead
  of ANY floating-point absolutely anywhere possible.
Reminds me of PDP8 days.

We were doing FFTs by the million. All done in 12-bit integer
arithmetic with a block exponent. Lookup tables for logs were simple
enough, as were trig functions. Not that anything was exactly "fast" --
IIRC a 1.2usec basic instruction cycle.

The machine did have a FP unit, but it was too s..l..o..w.. by far for this.

The circle goes around.
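Block floating point in miniature (a made-up sketch, not the original
PDP8 code): the whole block of integer samples shares one exponent, and
after any stage that can grow the values you shift the block down and
bump the exponent so nothing overflows:

  /* Made-up miniature of a block exponent scheme, not the PDP8 code.
     True value of sample i is x[i] * 2^exp for the whole block.       */
  #include <stdint.h>
  #include <stdlib.h>

  struct block {
      int16_t x[256];   /* fixed-point samples        */
      int     exp;      /* shared block exponent      */
  };

  /* keep every |x[i]| below 2^14 so the next stage's adds cannot overflow */
  static void renorm(struct block *b)
  {
      int i;
      int16_t maxmag = 0;
      for (i = 0; i < 256; i++) {
          int16_t m = (int16_t)abs(b->x[i]);
          if (m > maxmag) maxmag = m;
      }
      while (maxmag >= (1 << 14)) {
          for (i = 0; i < 256; i++)
              b->x[i] /= 2;          /* halve every sample ...            */
          b->exp++;                  /* ... and account for it here       */
          maxmag /= 2;
      }
  }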
--
Mike Scott
Harlow, England
Richard Kettlewell
2024-10-15 08:14:40 UTC
Post by ***@ud0s4.net
Post by Richard Kettlewell
Post by ***@ud0s4.net
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
Post by ***@ud0s4.net
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
They need to take it further - integers instead
of ANY floating-point absolutely anywhere possible.
Perhaps you could publish your alternative algorithm that satisfies
their use case.
--
https://www.greenend.org.uk/rjk/
Pancho
2024-10-13 10:45:46 UTC
Post by ***@ud0s4.net
The new technique is basic—instead of using complex
floating-point multiplication (FPM), the method uses integer
addition. Apps use FPM to handle extremely large or small
numbers, allowing applications to carry out calculations
using them with extreme precision. It is also the most
energy-intensive part of AI number crunching.
That isn't really true. Floats can handle big and small, but the reason
people use them is for simplicity.

The problem is that typical integer calculations are not closed, the
result is not an integer. Addition is fine, but the result of division
is typically not an integer. So if you use integers to model a problem
every time you do a division (or exp, log, sin, etc) you need to make a
decision about how to force the result into an integer.
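In miniature, even a plain scaled-integer divide forces that choice on
you (a generic sketch, nothing to do with the paper):

  /* The "decision" in miniature: truncate, or round to nearest?
     Floats make a ballpark-reasonable choice for you automatically.   */
  #include <stdint.h>

  static int32_t div_trunc(int32_t a, int32_t b)     /* toward zero      */
  {
      return a / b;
  }

  static int32_t div_nearest(int32_t a, int32_t b)   /* b > 0 assumed    */
  {
      return (a >= 0) ? (a + b / 2) / b : (a - b / 2) / b;
  }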

Floats actually use integral values for exponent and mantissa, but they
automatically make ballpark reasonable decisions about how to force the
results into integral values for mantissa and exponent, meaning
operations are effectively closed (ignoring exceptions). So the
programmer doesn't have to worry, so much.

Floating point ops are actually quite efficient, much less of a concern
than something like a branch misprediction. A 20x speed up (energy
saving) sounds close to a theoretical maximum. I would be surprised if
it can be achieved in anything but a few cases.
186282@ud0s4.net
2024-10-15 06:43:08 UTC
Post by Pancho
Post by ***@ud0s4.net
The new technique is basic—instead of using complex
floating-point multiplication (FPM), the method uses integer
addition. Apps use FPM to handle extremely large or small
numbers, allowing applications to carry out calculations
using them with extreme precision. It is also the most
energy-intensive part of AI number crunching.
That isn't really true. Floats can handle big and small, but the reason
people use them is for simplicity.
"Simple", usually. Energy/time-efficient ... not so much.
Post by Pancho
The problem is that typical integer calculations are not closed, the
result is not an integer. Addition is fine, but the result of division
is typically not an integer. So if you use integers to model a problem
every time you do a division (or exp, log, sin, etc) you need to make a
decision about how to force the result into an integer.
The question is how EXACT the precision HAS to be for
most "AI" uses. Might be safe to throw away a few
decimal points at the bottom.
Post by Pancho
Floats actually use integral values for exponent and mantissa, but they
automatically make ballpark reasonable decisions about how to force the
results into integral values for mantissa and exponent, meaning
operations are effectively closed (ignoring exceptions).  So the
programmer doesn't have to worry, so much.
Floating point ops are actually quite efficient, much less of a concern
than something like a branch misprediction. A 20x speed up (energy
saving) sounds close to a theoretical maximum. I would be surprised if
it can be achieved in anything but a few cases.
Well ... the article insists they are NOT energy-efficient,
esp when performed en-masse. I think their prelim tests
suggested an almost 95% savings (sometimes).

Anyway, at least the IDEA is back out there again. We
old guys, oft dealing with microcontrollers, knew the
advantages of wider integers over even 'small' FP.

Math processors disguised the amount of processing
required for FP ... but it was STILL there.
The Natural Philosopher
2024-10-15 11:06:30 UTC
Post by ***@ud0s4.net
The question is how EXACT the precision HAS to be for
  most "AI" uses. Might be safe to throw away a few
  decimal points at the bottom.
My thesis is that *in some applications*, more low quality calculations
beat fewer high quality ones anyway.
I wasn't thinking of AI, as much as modelling complex turbulent flow in
aero and hydrodynamics or weather forecasting.
--
Outside of a dog, a book is a man's best friend. Inside of a dog it's
too dark to read.

Groucho Marx
186282@ud0s4.net
2024-10-16 06:38:08 UTC
Post by The Natural Philosopher
Post by ***@ud0s4.net
The question is how EXACT the precision HAS to be for
   most "AI" uses. Might be safe to throw away a few
   decimal points at the bottom.
My thesis is that *in some applications*, more low quality calculations
bets a fewer high quality ones anyway.
I wasn't thinking of AI, as much as modelling complex turbulent flow in
aero and hydrodynamics or weather forecasting
Well, weather, any decimal points are BS anyway :-)

However, AI and fuzzy logic and neural networks - it
has just been standard practice to use floats to handle
all values. I've got books going back into the mid 80s
on all those and you JUST USED floats.

BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.

In any case, I'm happy SOMEONE finally realized this.

TOOK a really LONG time though ......
Pancho
2024-10-16 07:23:45 UTC
Post by The Natural Philosopher
Post by ***@ud0s4.net
The question is how EXACT the precision HAS to be for
   most "AI" uses. Might be safe to throw away a few
   decimal points at the bottom.
My thesis is that *in some applications*, more low quality
calculations bets a fewer high quality ones anyway.
I wasn't thinking of AI, as much as modelling complex turbulent flow
in aero and hydrodynamics or weather forecasting
  Well, weather, any decimal points are BS anyway :-)
  However, AI and fuzzy logic and neural networks - it
  has just been standard practice to use floats to handle
  all values. I've got books going back into the mid 80s
  on all those and you JUST USED floats.
  BUT ... as said, even a 32-bit int can handle fairly
  large vals. Mult little vals by 100 or 1000 and you can
  throw away the need for decimal points - and the POWER
  required to do such calx. Accuracy should be more than
  adequate.
  In any case, I'm happy SOMEONE finally realized this.
  TOOK a really LONG time though ......
AIUI, GPU/CUDA only offered 32-bit floats, no doubles. So I think people
always knew.
Richard Kettlewell
2024-10-16 10:56:52 UTC
Post by ***@ud0s4.net
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used where
appropriate (although the scale is a power of 2 so you can shift
products down into the right place rather than dividing).
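i.e. something like this generic Q16.16 sketch (not from the paper):

  /* Generic Q16.16 fixed point: the scale is 2^16, so a multiply is an
     integer multiply followed by a shift back down -- no divide needed. */
  #include <stdint.h>

  typedef int32_t q16_16;                    /* value = raw / 65536.0    */

  static q16_16 q_from_int(int32_t n)        { return (q16_16)(n << 16); }
  static q16_16 q_mul(q16_16 a, q16_16 b)    { return (q16_16)(((int64_t)a * b) >> 16); }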
Post by ***@ud0s4.net
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper that
this thread is about.
--
https://www.greenend.org.uk/rjk/
186282@ud0s4.net
2024-10-18 04:20:39 UTC
Post by Richard Kettlewell
Post by ***@ud0s4.net
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used where
appropriate (although the scale is a power of 2 so you can shift
products down into the right place rather than dividing).
Post by ***@ud0s4.net
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper that
this thread is about.
Maybe I understood it better than you ... and from
4+ decades of experiences.

But, argue as you will. I'm not too proud. Many are
better than me, many more are worse.

IF this was just about the power reqs of various forms
of fixed/floating then there'd be little point in the
article. Breaking the FP tradition as much as possible
and going to (wide) ints really CAN save tons of power
and time. With current AI systems this is a BIG deal.

There was a period where I had to do some quasi-AI stuff
for micro-controllers. Crude NNs/Fuzzy mostly. Not too
sophisticated, yet the approach DID make 'em better.

Now DO check into what's needed for FP on a PIC or 8051.
It's nasty. By seeing beyond the usual examples in books
and articles - which all used FP for "convenience" -
I found the vast advantages of substituting ints instead.
Easy to FAKE a few decimal-points of precision using
ints. That's usually more than good enough.

You can splice an NPU into most any kind of processor
BUT the steps to do FP don't really change, still suck
up power. Just SEEMS trivial because it's faster.
Richard Kettlewell
2024-10-18 16:34:17 UTC
Post by ***@ud0s4.net
Post by Richard Kettlewell
Post by ***@ud0s4.net
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used
where appropriate (although the scale is a power of 2 so you can
shift products down into the right place rather than dividing).
Post by ***@ud0s4.net
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper
that this thread is about.
Maybe I understood it better than you ... and from
4+ decades of experiences.
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
--
https://www.greenend.org.uk/rjk/
186282ud0s3
2024-10-18 18:12:13 UTC
Post by Richard Kettlewell
Post by ***@ud0s4.net
Post by Richard Kettlewell
Post by ***@ud0s4.net
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used
where appropriate (although the scale is a power of 2 so you can
shift products down into the right place rather than dividing).
Post by ***@ud0s4.net
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper
that this thread is about.
Maybe I understood it better than you ... and from
4+ decades of experiences.
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
Umm ... because the idea of swapping FP for ints in
order to save lots of power was introduced?

This issue is getting to be *political* now - the
ultra-greenies freaking about how much power the
'AI' computing centers require.
Chris Ahlstrom
2024-10-18 19:56:35 UTC
Post by 186282ud0s3
<snip>
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
Umm ... because the idea of swapping FP for ints in
order to save lots of power was introduced ?
This issue is getting to be *poitical* now - the
ultra-greenies freaking about how much power the
'AI' computing centers require.
Heh, I freak out about sites I visit that make my computer rev up and
turn on the cooler: sites polluted with ads, sites that use your CPU
to mine bitcoin and who knows what else.
--
A putt that stops close enough to the cup to inspire such comments as
"you could blow it in" may be blown in. This rule does not apply if
the ball is more than three inches from the hole, because no one wants
to make a travesty of the game.
-- Donald A. Metz
The Natural Philosopher
2024-10-18 20:06:40 UTC
Post by Chris Ahlstrom
Post by 186282ud0s3
<snip>
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
Umm ... because the idea of swapping FP for ints in
order to save lots of power was introduced ?
This issue is getting to be *poitical* now - the
ultra-greenies freaking about how much power the
'AI' computing centers require.
Heh, I freak out about sites I visit that make my computer rev up and
turn on the cooler: sites polluted with ads, sites that use your CPU
to mine bitcoin and who knows what else.
People say this, but I've seen hardly any ads since installing
uBlock Origin.
I have a CPU and bandwidth monitor in my task bar and if it starts
looking odd, I exit the site...
--
New Socialism consists essentially in being seen to have your heart in
the right place whilst your head is in the clouds and your hand is in
someone else's pocket.
rbowman
2024-10-15 19:20:32 UTC
The question is how EXACT the precision HAS to be for most "AI" uses.
Might be safe to throw away a few decimal points at the bottom.
It's usually referred to as 'machine learning' rather than AI but when you
look at TinyML on edge devices doing image recognition, wake word
processing, and other tasks it's impressive how much you can throw away
and still get a reasonable quality of results.

https://www.tinyml.org/
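As a toy illustration of the 'throwing away' (my sketch, nothing from
tinyml.org): symmetric int8 quantisation of a weight vector keeps one
float scale per tensor and a single 8-bit integer per weight:

  /* Toy symmetric int8 quantisation: pick a scale so the largest |w|
     maps to 127, round each weight to an int8, reconstruct on demand. */
  #include <math.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      float w[5] = { 0.31f, -1.72f, 0.004f, 0.95f, -0.27f };
      float maxabs = 0.0f, scale;
      int8_t q[5];
      int i;

      for (i = 0; i < 5; i++)
          if (fabsf(w[i]) > maxabs) maxabs = fabsf(w[i]);
      scale = maxabs / 127.0f;

      for (i = 0; i < 5; i++)
          q[i] = (int8_t)lrintf(w[i] / scale);

      for (i = 0; i < 5; i++)     /* original -> int8 -> reconstructed */
          printf("%+.3f -> %+4d -> %+.3f\n", w[i], q[i], q[i] * scale);
      return 0;
  }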

This goes back to the slide rule days. Sure, you could whip out your book
of six place tables and get seemingly more accurate results but did all
those decimal places mean anything in the real world? Computers took the
pain out of calculations but also tended to avoid the questions of 'what
does this really mean in the real world'.
Chris Ahlstrom
2024-10-15 19:46:05 UTC
Post by rbowman
The question is how EXACT the precision HAS to be for most "AI" uses.
Might be safe to throw away a few decimal points at the bottom.
It's usually referred to as 'machine learning' rather than AI but when you
look at TinyML on edge devices doing image recognition, wake word
processing, and other tasks it's impressive how much you can throw away
and still get a reasonable quality of results.
https://www.tinyml.org/
This goes back to the slide rule days. Sure, you could whip out your book
of six place tables and get seemingly more accurate results but did all
those decimal places mean anything in the real world? Computers took the
pain out of calculations but also tended to avoid the questions of 'what
does this really mean in the real world'.
In high school chemistry, we learned how to apply uncertainty ranges (plus or
minus) to measurements and how to accumulate ranges based on multiple
measurements.

The political polls state ranges, but nothing about the alpha, the N, and,
most importantly, the wording of the poll questions and the nature of the
sampling.
--
It was the Law of the Sea, they said. Civilization ends at the waterline.
Beyond that, we all enter the food chain, and not always right at the top.
-- Hunter S. Thompson
rbowman
2024-10-16 02:35:52 UTC
Post by Chris Ahlstrom
The political polls state ranges, but nothing about the alpha, the N, and,
most importantly, the wording of the poll questions and the nature of
the sampling.
I try to ignore polls and most of the hype. A few years back I went to bed
expecting Hillary Clinton to be the president-elect when I woke up. The DJ
on the radio station I listen to in the morning was a definite lefty. When he
played Norah Jones' 'Carry On' I found I'd been mistaken.

http://youtu.be/DqA25Ug71Mc

"Let's just forget
Leave it behind
And carry on."
186282@ud0s4.net
2024-10-16 07:13:44 UTC
Post by rbowman
Post by Chris Ahlstrom
The political polls state ranges, but nothing about the alpha, the N, and,
most importantly, the wording of the poll questions and the nature of
the sampling.
I try to ignore polls and most of the hype. A few years back I went to bed
expecting Hillary Clinton to be the president elect when I woke up. The DJ
on the radio station I listen to morning was a definite lefty. When he
played Norah Jones' 'Carry On' I found I'd been mistaken.
http://youtu.be/DqA25Ug71Mc
Trump IS grating ... no question ... but K is just
an empty skull. That's been her JOB. Can't have
someone like that in times like these.

Not entirely sure of the Linux angle here though ...
Chris Ahlstrom
2024-10-16 11:40:46 UTC
Post by ***@ud0s4.net
Post by rbowman
Post by Chris Ahlstrom
The political polls state ranges, but nothing about the alpha, the N, and,
most importantly, the wording of the poll questions and the nature of
the sampling.
I try to ignore polls and most of the hype. A few years back I went to bed
expecting Hillary Clinton to be the president elect when I woke up. The DJ
on the radio station I listen to morning was a definite lefty. When he
played Norah Jones' 'Carry On' I found I'd been mistaken.
http://youtu.be/DqA25Ug71Mc
Trump IS grating ... no question ... but K is just
an empty skull. That's been her JOB. Can't have
someone like that in times like these.
Trump's the empty skull. Well, it is full... of nonsense and bile.
Post by ***@ud0s4.net
Not entirely sure of the Linux angle here though ...
Harris as VP was like Linux, working reliably in the background.

She's no empty skull. She was a prosecutor, a district attorney, a state
attorney general, a US senator, and the vice president. But some people cannot
stand that in a woman.
--
I began many years ago, as so many young men do, in searching for the
perfect woman. I believed that if I looked long enough, and hard enough,
I would find her and then I would be secure for life. Well, the years
and romances came and went, and I eventually ended up settling for someone
a lot less than my idea of perfection. But one day, after many years
together, I lay there on our bed recovering from a slight illness. My
wife was sitting on a chair next to the bed, humming softly and watching
the late afternoon sun filtering through the trees. The only sounds to
be heard elsewhere were the clock ticking, the kettle downstairs starting
to boil, and an occasional schoolchild passing beneath our window. And
as I looked up into my wife's now wrinkled face, but still warm and
twinkling eyes, I realized something about perfection... It comes only
with time.
-- James L. Collymore, "Perfect Woman"
Charlie Gibbs
2024-10-16 16:16:23 UTC
Post by Chris Ahlstrom
Harris as VP was like Linux, working reliably in the background.
She's no empty skull. She was a prosecutor, a district attorney, a state
attorney general, a US senator, and the vice president. But some people cannot
stand that in a woman.
<applause>
Post by Chris Ahlstrom
I began many years ago, as so many young men do, in searching for the
perfect woman. I believed that if I looked long enough, and hard enough,
I would find her and then I would be secure for life. Well, the years
and romances came and went, and I eventually ended up settling for someone
a lot less than my idea of perfection. But one day, after many years
together, I lay there on our bed recovering from a slight illness. My
wife was sitting on a chair next to the bed, humming softly and watching
the late afternoon sun filtering through the trees. The only sounds to
be heard elsewhere were the clock ticking, the kettle downstairs starting
to boil, and an occasional schoolchild passing beneath our window. And
as I looked up into my wife's now wrinkled face, but still warm and
twinkling eyes, I realized something about perfection... It comes only
with time.
-- James L. Collymore, "Perfect Woman"
Beautiful. Here's Heinlein's take on it:

A man does not insist on physical beauty in a woman who
builds up his morale. After a while he realizes that
she _is_ beautiful - he just hadn't noticed it at first.
--
/~\ Charlie Gibbs | We'll go down in history as the
\ / <***@kltpzyxm.invalid> | first society that wouldn't save
X I'm really at ac.dekanfrus | itself because it wasn't cost-
/ \ if you read it the right way. | effective. -- Kurt Vonnegut
vallor
2024-10-16 16:25:42 UTC
Post by Chris Ahlstrom
Trump IS grating ... no question ... but K is just an empty skull.
That's been her JOB. Can't have someone like that in times like
these.
Trump's the empty skull. Well, it is full... of nonsense and bile.
He carries his bowels in his head, which explains what comes
out of his mouth.
Post by Chris Ahlstrom
Not entirely sure of the Linux angle here though ...
Harris as VP was like Linux, working reliably in the background.
She's no empty skull. She was a prosecutor, a district attorney, a state
attorney general, a US senator, and the vice president. But some people
cannot stand that in a woman.
While I agree with you, it has fsck-all to do with Linux.

I built my latest kernel Monday morning, but didn't boot it until
just a few minutes ago.

$ uname -a
Linux lm 6.11.3 #1 SMP PREEMPT_DYNAMIC Mon Oct 14 06:25:38 PDT 2024 x86_64
x86_64 x86_64 GNU/Linux

Build of my kitchen-sink kernel took...

real 436.52
user 21765.03
sys 3686.62

...using RAMdisk on this system which uses an
AMD Ryzen Threadripper 3970X 32-Core Processor.
--
-v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090 Ti
OS: Linux 6.11.3 Release: Mint 21.3 Mem: 258G
"(A)bort, (R)etry, (S)elf-destruct?"
Chris Ahlstrom
2024-10-16 18:24:03 UTC
Post by vallor
<snip>
While I agree with you, it has fsck-all to do with Linux.
You yelled at me, but not anyone else!!!! No fair! No fair!!!
Post by vallor
$ uname -a
Linux lm 6.11.3 #1 SMP PREEMPT_DYNAMIC Mon Oct 14 06:25:38 PDT 2024 x86_64
x86_64 x86_64 GNU/Linux
Build of my kitchen-sink kernel took...
real 436.52
user 21765.03
sys 3686.62
...using RAMdisk on this system which uses an
AMD Ryzen Threadripper 3970X 32-Core Processor.
--
Wandering

What is the difference between assent and denial?
What is the difference between beautiful and ugly?
What is the difference between fearsome and afraid?
The people are merry as if at a magnificent party
Or playing in the park at springtime,
But I am tranquil and wandering,
Like a newborn before it learns to smile,
Alone, with no true home.
The people have enough and to spare,
Where I have nothing,
And my heart is foolish,
Muddled and cloudy.
The people are bright and certain,
Where I am dim and confused;
The people are clever and wise,
Where I am dull and ignorant;
Aimless as a wave drifting over the sea,
Attached to nothing.
The people are busy with purpose,
Where I am impractical and rough;
I do not share the peoples' cares
But I am fed at nature's breast.
-- Lao Tse, "Tao Te Ching"
vallor
2024-10-16 18:29:18 UTC
Post by Chris Ahlstrom
On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom
<snip>
While I agree with you, it has fsck-all to do with Linux.
You yelled at me, but not anyone else!!!! No fair! No fair!!!
I was agreeing with you. :)
Post by Chris Ahlstrom
$ uname -a Linux lm 6.11.3 #1 SMP PREEMPT_DYNAMIC Mon Oct 14 06:25:38
PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
Build of my kitchen-sink kernel took...
real 436.52 user 21765.03 sys 3686.62
...using RAMdisk on this system which uses an AMD Ryzen Threadripper
3970X 32-Core Processor.
--
-v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090 Ti
OS: Linux 6.11.3 Release: Mint 21.3 Mem: 258G
"Creditors have much better memories than debtors."
The Natural Philosopher
2024-10-16 19:02:57 UTC
Post by Chris Ahlstrom
She's no empty skull. She was a prosecutor, a district attorney, a state
attorney general, a US senator, and the vice president. But some people cannot
stand that in a woman.
I note that you omitted the adjective 'successful' from her resumé...

That pretty much describes our new prime minister.

He is, all things considered, a man, and a completely incompetent cunt,
elected because the Tories seemed even worse.

In fact they are almost identical in their utter failure to address
really important issues and make a lot of noise about irrelevant tripe.
And put their snouts in the trough
--
“The fundamental cause of the trouble in the modern world today is that
the stupid are cocksure while the intelligent are full of doubt."

- Bertrand Russell
rbowman
2024-10-16 21:07:56 UTC
Post by Chris Ahlstrom
Harris as VP was like Linux, working reliably in the background.
There you have the problem. If she was working reliably in the background
for the last three and a half years she can hardly claim to be a candidate
for change. Obama could make that work after eight years of Bush.
Chris Ahlstrom
2024-10-17 10:54:25 UTC
Post by rbowman
Post by Chris Ahlstrom
Harris as VP was like Linux, working reliably in the background.
There you have the problem. If she was working reliably in the background
for the last three and a half years she can hardly claim to be a candidate
for change. Obama could make that work after eight years of Bush.
Whatever, dude. Incremental change is fine with me.

The big changes we really need (eliminating Citizens United, taking medical
insurers out of the system, and so much more) will never happen.

The game is rigged.

Heh heh:
--
We have only two things to worry about: That things will never get
back to normal, and that they already have.
The Natural Philosopher
2024-10-17 12:38:58 UTC
Post by Chris Ahlstrom
Post by rbowman
Post by Chris Ahlstrom
Harris as VP was like Linux, working reliably in the background.
There you have the problem. If she was working reliably in the background
for the last three and a half years she can hardly claim to be a candidate
for change. Obama could make that work after eight years of Bush.
Whatever, dude. Incremental change is fine with me.
The big changes we really need (eliminating Citizens United, taking medical
insurers out of the system, and so much more) will never happen.
It will if you let Putin take Alaska and China have the whole west coast.
Post by Chris Ahlstrom
The game is rigged.
--
“it should be clear by now to everyone that activist environmentalism
(or environmental activism) is becoming a general ideology about humans,
about their freedom, about the relationship between the individual and
the state, and about the manipulation of people under the guise of a
'noble' idea. It is not an honest pursuit of 'sustainable development,'
a matter of elementary environmental protection, or a search for
rational mechanisms designed to achieve a healthy environment. Yet
things do occur that make you shake your head and remind yourself that
you live neither in Joseph Stalin’s Communist era, nor in the Orwellian
utopia of 1984.”

Vaclav Klaus