The old argument for the original Cray architecture


Balder Oddson

Everyone knows the most common classical architectures.

The ideal solution is to use many chips to make one beefy machine: for
lack of a better word, given the difference, a data core and something
that can be referred to as super, or Formula 1.

It supports two configurations, circular or horizontal pie segments.
Each segment of a circle has DDR on the digital clock of the "beef",
perhaps ideally superconducting to increase available space and speed.
The first thing any segment needs to ask electrically is whether it is
deadbeef or feedbeef: do I have the Unix console, or does another
segment have it? Am I single data rate and feedbeef, or double data
rate and deadbeef?

If you have sync on double data rate, you are deadbeef; if not, feedbeef.
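That rule can be sketched in a few lines. This is a toy model, not any real Cray mechanism: the magic constants, the `has_ddr_sync` flag, and the function name are all assumptions made for illustration.

```python
# Hypothetical segment role assignment following the rule above: a
# segment that has sync on double data rate is deadbeef (the console
# holder), otherwise it is feedbeef.
DEADBEEF = 0xDEADBEEF  # double data rate; holds the Unix console
FEEDBEEF = 0xFEEDBEEF  # single data rate; console is elsewhere

def segment_role(has_ddr_sync: bool) -> int:
    """Answer the first electrical question a segment asks."""
    return DEADBEEF if has_ddr_sync else FEEDBEEF

# Four pie segments, only the first and last with DDR sync.
roles = [segment_role(s) for s in (True, False, False, True)]
```

Exactly one segment would hold the console in practice; the sketch only shows how each segment answers the question locally.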
You have a local clock that is always good, and then you have this
internal structure where speed matters more, as it is the global clock
that should ideally match the local clock in speed. By more modern
standards, there would be something better than direct wires between
segments.

A virtual Cray architecture can be done with SR-IOV and MR-IOV to handle
device addresses, and likewise with an IOMMU and hardware
virtualization. The goal is to achieve the ideal electrical and physical
properties by creating this hardware mapping from aarch64 EL3, and to
treat the processor as a classical Cray scalar-vector machine.
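A rough model of that mapping, reduced to its essentials: each pie segment sits behind an SR-IOV-style virtual function, and an IOMMU-style table translates segment-local I/O addresses to host addresses. The class, the 4 KiB page granularity, and all names here are assumptions for illustration, not any real SR-IOV or EL3 interface.

```python
# Toy IOMMU translation table: (virtual function, I/O page) -> host page.
# A hypervisor (or EL3 firmware, in the scheme above) would own this
# table; segments would only ever see their own I/O addresses.
PAGE = 4096  # assumed page granularity

class Iommu:
    def __init__(self):
        self.table = {}  # (vf_id, io_page) -> host_page

    def map(self, vf_id: int, io_addr: int, host_addr: int) -> None:
        """Map one page of a segment's I/O space to host memory."""
        self.table[(vf_id, io_addr // PAGE)] = host_addr // PAGE

    def translate(self, vf_id: int, io_addr: int) -> int:
        """Translate a segment-local address to a host address."""
        host_page = self.table[(vf_id, io_addr // PAGE)]
        return host_page * PAGE + io_addr % PAGE

iommu = Iommu()
iommu.map(vf_id=0, io_addr=0x0000, host_addr=0x10000)
```

The point of the model is isolation: a segment can only reach host memory through entries someone more privileged put in the table.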

Whether you connect each segment to memory or to a data link shouldn't
matter for the architecture itself. Gather-scatter and scatter-gather
don't give you an ideal Ethernet switch, but the machine can probably
act as a hub for such a protocol as well.
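A minimal sketch of why gather-scatter looks like a hub rather than a switch: scatter copies one frame to every port (a broadcast, which is hub behaviour), and gather pulls pending frames from all ports into one stream. A switch would instead forward each frame to a single learned port. The port model and names are made up for illustration.

```python
# Hub-like behaviour from scatter/gather. Each "port" is just a queue
# of pending frames.

def scatter(frame: bytes, ports: list[list[bytes]]) -> None:
    """Scatter = broadcast: every port receives a copy of the frame."""
    for port in ports:
        port.append(frame)

def gather(ports: list[list[bytes]]) -> list[bytes]:
    """Gather: drain pending frames from all ports into one stream."""
    frames = [f for port in ports for f in port]
    for port in ports:
        port.clear()
    return frames

ports = [[], [], []]
scatter(b"hello", ports)
```

Nothing here filters by destination address, which is exactly why this is a hub and not a switch.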

I think this is the ideal general-purpose architecture, something like
what BSD was meant to run on, or was striving towards.

For IT security and performance, feedbeef was the right answer for
decades, if you could get a Cray.

Vectorizing pF towards scalar-vector operations is a more viable option
where security and performance both matter, given the inherent qualities
of a real Cray architecture, which is bad at doing one thing at a time
very few times. Maybe something that looks like a supercomputer will be
built again. Can a monster be built to handle the largest internet cable
in the world?
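Assuming pF here means a pf-style packet filter, the scalar-vector idea is to evaluate one rule against a whole batch of packets in a single pass, the way a vector machine applies one operation across a vector register, rather than filtering packet by packet. A toy sketch; the field and rule are invented, and real pf works nothing like this:

```python
# Toy vectorized filter: one rule, many packets, one pass. Returns a
# pass/drop mask for the whole batch, analogous to a vector mask
# register on a scalar-vector machine.

def filter_batch(dst_ports: list[int], blocked_port: int) -> list[bool]:
    """True = pass, False = drop, computed for the batch at once."""
    return [p != blocked_port for p in dst_ports]

# Block telnet (port 23) across a batch of four packets.
mask = filter_batch([80, 23, 443, 23], blocked_port=23)
```

This is where the Cray shape pays off: the machine that is "bad at doing one thing very few times" is exactly the machine you want when every rule touches thousands of packets.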

Balder Oddson