Finals

[Crossword grid with squares numbered 1–20]
Across
  2. A mathematical preliminary used for averaging rates or ratios (formulas follow this list).
  7. A synthetic benchmark that focuses on string manipulation and integer operations.
  9. A concept in parallel computing in which memory access time is approximately the same for each processor.
  11. A classification in Flynn’s taxonomy that describes a non-parallel computer.
  16. It is a software capability that enables a program to work with multiple threads at the same time.
  17. A mathematical preliminary that provides ways to rate the overall performance of a system and the performance of its constituent components (formulas follow this list).
  18. A superscalar limitation in which an instruction after a conditional branch cannot be executed in parallel with an instruction before the branch.
  19. It is an alternative architecture that allows several instructions to be executed at the same time but in different pipeline stages.
  20. It is the process of breaking the code into small chunks and timing each of these chunks to determine which of them consumes the most time.
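
If 2 Across and 17 Across refer to the harmonic mean and the geometric mean (an assumption; the puzzle itself does not give its answers), the usual formulas for n rates x_1, ..., x_n are:

  Harmonic mean:  $\bar{x}_H = \dfrac{n}{\sum_{i=1}^{n} 1/x_i}$
  Geometric mean: $\bar{x}_G = \left( \prod_{i=1}^{n} x_i \right)^{1/n}$

The harmonic mean is the appropriate average for rates such as MIPS or MFLOPS figures, while the geometric mean is commonly used to summarize normalized benchmark results for a whole system and its constituent components.
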
Down
  1. It is the distribution of work among tasks so that all tasks are kept busy all of the time.
  3. A parallel architecture concept that permits multiple independent threads to execute simultaneously on the same core.
  4. It is the amount of time required to coordinate parallel tasks.
  5. It is the science of making objective assessments of the performance of one system over another.
  6. An execution policy in which instructions are issued in the exact order that would correspond to sequential execution; results are written in the same order.
  8. It is a solution to the dependency problem in superscalar processors in which new registers are allocated to values.
  10. It is a parallel memory architecture that is also called a loosely coupled multiprocessor system.
  12. A data dependency that exists when the output of one instruction is required as an input to a subsequent instruction (see the sketch after this list).
  13. It is the simultaneous use of multiple compute resources to solve a computational problem.
  14. It is a step in designing parallel algorithms wherein tasks are grouped into larger tasks.
  15. It is any computer with several processors.
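
A minimal C sketch of the situation described in 12 Down, assuming the clue refers to a true (read-after-write) data dependency; the variable names are illustrative only:

  #include <stdio.h>

  int main(void) {
      int a = 3, b = 4;
      int sum   = a + b;     /* first instruction writes sum            */
      int twice = sum * 2;   /* second instruction reads sum, so it     */
                             /* cannot be issued in parallel with the   */
                             /* one above without stalling or forwarding */
      printf("%d\n", twice);
      return 0;
  }

Because the second statement consumes the output of the first, a superscalar processor cannot execute the two instructions in parallel, which is exactly the dependency the clue describes.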