Across
- 4. Which function returns the total number of processes in the communicator?
- 7. Which function initializes the MPI environment?
- 9. What is the default scope of variables in OpenMP?
- 11. What directive is used for loop parallelization in OpenMP?
- 12. How do you specify the number of threads to use in a parallel region?
- 13. What function is used to send a message from one process to another?
- 15. Which directive specifies a parallel region in OpenMP?
- 16. Which MPI datatype represents contiguous blocks of memory?
- 18. What is a collection of thread blocks called?
- 19. What clause is used to specify a reduction operation in OpenMP?
- 20. What is a group of threads called in CUDA?
- 22. What environment variable can be used to set the number of threads in OpenMP?
- 23. What directive is used for task parallelism in OpenMP?
- 24. Which directive is used to synchronize threads in OpenMP?
- 25. Which function is used to free GPU memory?
Down
- 1. What is the purpose of the MPI_Barrier function?
- 2. What clause is used to declare a variable as private in OpenMP?
- 3. What does OpenMP stand for?
- 5. What function is used to perform a reduction operation across all processes?
- 6. What function is used to perform a collective operation where data is gathered from all processes?
- 8. What clause is used to specify a shared variable in OpenMP?
- 9. What clause is used to control how loop iterations are distributed among threads?
- 10. Which function is used to perform a collective operation where data is scattered from one process to all others?
- 14. What function is used to receive a message?
- 16. What function is used to obtain the rank of the calling process?
- 17. What keyword is used to declare a CUDA kernel?
- 21. What does MPI stand for?
