The clock rate is the fundamental rate, in cycles per second (measured in hertz), of the clock in any synchronous circuit. A crystal oscillator frequency reference, for example, typically produces a fixed sinusoidal waveform; the clock rate is that frequency reference translated by electronic circuitry into a corresponding square-wave pulse train, typically for digital electronics applications. In this context, the word "speed" (physical movement) should not be confused with frequency or the corresponding clock rate; thus, the term "clock speed" is a misnomer.
A single clock cycle (typically shorter than a nanosecond in modern non-embedded microprocessors) toggles between a logical zero and a logical one state. Historically, the logical zero state of a clock cycle has persisted longer than the logical one state due to thermal and electrical specification constraints.
CPU manufacturers typically charge premium prices for CPUs that operate at
higher clock rates. For a given CPU, the clock rates are determined at the end
of the manufacturing process through actual testing of each CPU. CPUs that are
tested as complying with a given set of standards may be labeled with a higher
clock rate, e.g., 1.50 GHz, while those that fail the standards of the higher
clock rate yet pass the standards of a lesser clock rate may be labeled with the
lesser clock rate, e.g., 1.33 GHz, and sold at a relatively lower price.
Limits to clock rate
The clock rate of a CPU is normally determined by the frequency of an
oscillator crystal. The first commercial PC, the Altair 8800 (by MITS), used an
Intel 8080 CPU with a clock rate of 2 MHz (2 million cycles/second). The
original IBM PC (c. 1981) had a clock rate of 4.77 MHz (4,772,727
cycles/second). In 1995, Intel's Pentium chip ran at 100 MHz (100 million
cycles/second), and in 2002, an Intel Pentium 4 model was introduced as the
first CPU with a clock rate of 3 GHz (three billion cycles/second, corresponding to ~3.3 × 10⁻¹⁰ seconds per cycle).
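The figures above follow from the reciprocal relationship between clock rate and cycle time, which this short Python sketch illustrates (the function name is ours, for illustration only):

```python
# Illustrative helper: one clock cycle lasts 1/f seconds.
def cycle_time_seconds(clock_rate_hz: float) -> float:
    """Return the duration of a single clock cycle for a given clock rate."""
    return 1.0 / clock_rate_hz

# A 3 GHz clock has a period of about 3.3e-10 s, i.e. a third of a nanosecond.
print(cycle_time_seconds(3e9))
```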
With any particular CPU, replacing the crystal with another crystal that oscillates at half the frequency ("underclocking") will generally make the CPU run at half the performance. It will also make the CPU produce roughly half as much heat. Some people try to increase the performance of a CPU by replacing the oscillator crystal with a higher-frequency crystal ("overclocking"). However, they will soon hit one or the other of these two limits on clock rate:
After each clock pulse, the signal lines inside the CPU need time to
settle to their new state. If the next clock pulse comes in too soon, while
the signals are still settling (before every signal line has finished
transitioning from 0 to 1, or from 1 to 0), the results will be incorrect.
Chip manufacturers publish a "maximum clock rate" specification, and they
test chips before selling them to make sure they meet that specification,
even when executing the most complicated instructions with the data patterns
that take the longest to settle (testing at the temperature and voltage that give the worst-case settling time).
Some energy is wasted as heat (mostly inside the driving transistors)
whenever a signal line makes a transition from the 0 to the 1 state or vice
versa. When executing complicated instructions that cause lots of
transitions, higher clock rates produce more heat. If electricity is
converted to heat faster than a particular computer cooling system can get
rid of it, then the transistors may get hot enough to be destroyed.
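The heat limit is commonly approximated by the classic CMOS dynamic-power formula, P ≈ α·C·V²·f, which makes explicit why heat output grows linearly with clock rate. A sketch using made-up illustrative values (activity factor, capacitance, and voltage are assumptions, not measurements of any real chip):

```python
def dynamic_power_watts(activity: float, switched_capacitance_f: float,
                        supply_voltage_v: float, clock_rate_hz: float) -> float:
    """Classic CMOS dynamic-power approximation: P = a * C * V^2 * f.
    All example values below are made up for illustration."""
    return activity * switched_capacitance_f * supply_voltage_v ** 2 * clock_rate_hz

# Doubling the clock rate doubles dynamic power, all else being equal.
low = dynamic_power_watts(0.2, 1e-9, 1.2, 1.5e9)
high = dynamic_power_watts(0.2, 1e-9, 1.2, 3.0e9)
print(high / low)  # 2.0
```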
Engineers continue to find new ways to design CPUs that settle a little
quicker or use slightly less energy per transition, pushing back those limits,
producing new CPUs that can run at slightly higher clock rates. The ultimate
limits to energy per transition are explored in reversible computing, although
no reversible computers have yet been implemented.
People also continue to find new ways to design CPUs so that, although they may run at the same clock rate as older CPUs, or a lower one, they complete more instructions per clock cycle. (See also Moore's law.)
The clock rate of a computer is only useful for providing comparisons between
computer chips in the same processor family. An IBM PC with an Intel 80486
running at 50 MHz will
be about twice as fast as one with the same CPU, memory and display running at
25 MHz, while the same will not be true for a MIPS R4000 running at the same clock rate, as the two are different processors with different functionality.
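The same-family caveat can be made concrete: useful throughput is roughly instructions per cycle (IPC) times clock rate, so a chip with a lower clock but a higher IPC can come out ahead. A sketch with invented numbers:

```python
def instructions_per_second(ipc: float, clock_rate_hz: float) -> float:
    """Throughput = instructions completed per cycle * cycles per second."""
    return ipc * clock_rate_hz

# Invented numbers: a 2 GHz design averaging 1.5 IPC outperforms
# a 3 GHz design averaging 0.9 IPC.
print(instructions_per_second(1.5, 2e9) > instructions_per_second(0.9, 3e9))  # True
```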
Furthermore, there are many other factors to consider when comparing the
performance of entire computers, like the clock rate of the computer's
front-side bus (FSB), the clock rate of the
RAM, the width in bits of the CPU's bus and the amount of Level 1, Level 2 and Level 3 cache.
Clock rates should not be used when comparing different computers or
different processor families. Rather, some software benchmark should be used.
Clock rates can be very misleading since the amount of work different computer
chips can do in one cycle varies. For example, RISC CPUs tend to have simpler
instructions than CISC CPUs (but higher clock rates), and superscalar processors
can execute more than one instruction per cycle (on average), yet it is not
uncommon for them to do "less" in a clock cycle. In addition, subscalar CPUs and the use of parallelism can also affect the performance of the computer regardless of clock rate.
In the early 1990s, most computer companies advertised their computers'
performance chiefly by referring to their CPUs' clock rates. This led to various
marketing games, such as Apple Computer's decision to create and market the
Power Macintosh 8100 with a clock rate of 110 MHz so that Apple could advertise
that its computer had the fastest clock rate available—the fastest Intel
processor available at the time ran at 100 MHz. This superiority in clock rate,
however, was meaningless since the PowerPC 601 and Pentium implemented different
instruction set architectures and had different microarchitectures.
After 2000, Intel's competitor, Advanced Micro Devices, started using model
numbers instead of clock rates to market its CPUs because of the lower CPU
clocks when compared to Intel. Continuing this trend, it attempted to dispel the "megahertz myth", arguing that clock rate alone did not tell the whole story of the power of its CPUs. In 2004, Intel announced it would do the same, probably because of
consumer confusion over its Pentium M mobile CPU, which reportedly ran at about
half the clock rate of the roughly equivalent Pentium 4 CPU. As of 2007,
performance improvements have continued to come through innovations in
pipelining, instruction sets, and the development of multi-core processors,
rather than clock rate increases (which have been constrained by CPU power consumption).
John Mauchly and J. Presper Eckert of the University of Pennsylvania designed one of the first electronic computers, ENIAC, which stood for Electronic Numerical Integrator and Computer. It filled forty cabinets, each about 9 feet tall, and contained roughly 18,000 vacuum tubes connected by miles of wiring. Designed as a weapon of war, it could calculate trajectories for WWII artillery guns, but because it operated under Army secrecy it was little known to the public, only to computing circles, and it was eventually buried in history under controversy, jealousy, and lawsuits; ENIAC, Eckert, and Mauchly were largely forgotten. The two men also founded the first computer company, but neither found fame nor fortune, because both were poor businessmen and lousy marketers. The team that built the first computer ate, slept, and lived with it, running into problems that ranged from how to combine vacuum tubes so they could count numbers without making mistakes all the way down to how to make supplies last. The machine was finally completed in 1946.