ECS I Reading into Speaking Computer Architecture

[Crossword grid: squares numbered 1–18]
Across
  1. A system that allows multiple users or programs to access a computer seemingly at the same time by rapidly switching among them, improving efficiency and user interaction.
  3. An operating system technique that allows multiple programs to reside in memory at the same time, enabling the CPU to switch tasks when one program is waiting for I/O, thus improving overall efficiency.
  11. A mechanical switching device that uses an electromagnet to open or close circuits; early computers used relays for processing, which made them large and very slow.
  13. A processor architecture based on a simplified set of instructions designed to execute rapidly, providing high performance compared to traditional, more complex instruction sets.
  14. A design model for computers characterized by a single memory space for instructions and data, sequential instruction processing, and a central control unit.
  15. A computer architecture that uses hundreds or thousands of processors working simultaneously on different parts of a problem to achieve very high performance.
  17. A method of computation in which data and tasks are divided across multiple connected machines, allowing large datasets or complex workloads to be processed more efficiently.
  18. Technologies for manufacturing integrated circuits that place thousands to millions of transistors on a single chip, enabling the creation of microprocessors and dramatically increasing computing power.
Down
  2. A compact electronic component in which multiple transistors and other elements are fabricated on a single semiconductor chip, enabling faster, smaller, and more affordable computers.
  4. A method of interacting with a computer that uses visual elements such as windows, icons, and menus, allowing users to operate systems more intuitively than with text-only commands.
  5. The use of a graphics processing unit for non-graphics, computationally intensive tasks, enabling massive parallel processing for scientific, engineering, and data-heavy workloads.
  6. Specialized circuitry that performs mathematical operations on real numbers with fractional components, greatly speeding up scientific and engineering computations.
  7. A type of processor that can perform the same operation on large sets of data simultaneously, greatly accelerating scientific and engineering computations.
  8. A form of early computer memory that stored data using tiny magnetized ferrite rings, providing faster, more reliable storage than vacuum-tube or delay-line memory.
  9. A type of CPU optimized to perform mathematical operations on large arrays (vectors) of data at once, making it ideal for scientific and engineering calculations.
  10. A foundational computer architecture idea in which program instructions are stored in the same memory as data, allowing the machine to be reprogrammed without rewiring.
  12. A processor design that allows a CPU to execute multiple instructions during a single clock cycle by using several parallel execution units.
  16. A computing technique that creates virtual versions of hardware or operating systems, allowing multiple independent environments to run on a single physical machine.