QIP 2025
Across
- 2. What keyword is used to declare a CUDA kernel?
- 6. How do you specify the number of threads to use in a parallel region?
- 10. Which function initializes the MPI environment?
- 11. What keyword is used to declare a variable as private in OpenMP?
- 14. What does MPI stand for?
- 16. What directive is used for loop parallelization in OpenMP?
- 17. What clause is used to control how loop iterations are distributed among threads?
- 21. What is a collection of thread blocks called?
- 22. What does OpenMP stand for?
- 24. What clause is used to specify a reduction operation in OpenMP?
- 25. Which function is used to perform a collective operation where data is scattered from one process to all others?
- 26. What function is used to receive a single message in MPI point-to-point communication?
Down
- 1. What function is used to obtain the rank of the calling process?
- 3. What environment variable can be used to set the number of threads dynamically in OpenMP?
- 4. Which directive is used to synchronize threads in OpenMP?
- 5. What is the default scope of variables in OpenMP?
- 7. What function is used to perform a collective operation where data is gathered from all processes?
- 8. What function is used to perform a reduction operation across all processes in MPI?
- 9. What function is used to send a message from one process to another?
- 12. Which directive specifies a parallel region in OpenMP?
- 13. Which function returns the total number of processes in the communicator?
- 15. What clause is used to specify a shared variable in OpenMP?
- 18. Which function is used to free GPU memory?
- 19. What is a group of threads called in CUDA?
- 20. What is the purpose of the MPI_Barrier function?
- 23. What directive is used for task parallelism in OpenMP?