Parallel Computing (Computer) Questions and Answers
Question 1. Load balancing is
A. It involves only those tasks executing a communication operation.
B. It exists between program statements when the order of statement execution affects the results of the program.
C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
D. None of these
Explanation:-
Answer: Option C -> It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
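The idea in Option C can be sketched with a shared work queue, a common way to balance load dynamically. This is an illustrative example, not taken from the source; the worker and queue names are made up for the sketch.

```python
import queue
import threading

# Dynamic load balancing sketch: idle workers pull the next item from a
# shared queue, so every task stays busy until the work runs out and
# task idle time is minimized.
tasks = queue.Queue()
for i in range(20):
    tasks.put(i)

results = []
results_lock = threading.Lock()

def worker():
    while True:
        try:
            item = tasks.get_nowait()  # grab more work as soon as we are free
        except queue.Empty:
            return                     # no work left for this task
        with results_lock:
            results.append(item * item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because workers pull work on demand rather than being handed fixed shares up front, a faster worker automatically takes more items, which is exactly the idle-time minimization the answer describes.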
Question 2. In shared memory
A. Multiple processors can operate independently but share the same memory resources.
B. Multiple processors can operate independently but do not share the same memory resources.
C. Multiple processors can operate independently but some do not share the same memory resources.
D. None of these
Explanation:-
Answer: Option A -> Multiple processors can operate independently but share the same memory resources.
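A minimal sketch of the shared-memory idea, using threads as the independent "processors" (an assumption made for illustration): each thread runs on its own, but all of them update the same memory location, guarded by a lock.

```python
import threading

# Several threads operate independently but share the same memory
# (a single counter); the lock serializes access to the shared resource.
counter = {"value": 0}
lock = threading.Lock()

def work(increments):
    for _ in range(increments):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```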
Question 3. These applications typically have multiple executable object files (programs). While the application is run in parallel, each task can execute the same program as the other tasks or a different one, and all tasks may use different data. This model is called
A. Single Program Multiple Data (SPMD)
B. Multiple Program Multiple Data (MPMD)
C. Von Neumann Architecture
D. None of these
Explanation:-
Answer: Option B -> Multiple Program Multiple Data (MPMD)
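The MPMD pattern can be sketched in miniature: two tasks run different "programs" (stand-in functions here, an assumption for illustration) on different data, in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# MPMD sketch: different tasks execute *different* programs
# on *different* data.
def program_a(data):
    return sum(data)       # one task runs this program

def program_b(data):
    return max(data)       # another task runs a different program

with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(program_a, [1, 2, 3])
    future_b = pool.submit(program_b, [7, 4, 9])
results = (future_a.result(), future_b.result())
```

In a real MPMD application the two "programs" would be separate executables launched as separate tasks, but the structure is the same: different code, different data, running concurrently.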
Question 4. Fine-grain Parallelism is
A. In parallel computing, it is a qualitative measure of the ratio of computation to communication.
B. Here, relatively small amounts of computational work are done between communication events.
C. Relatively large amounts of computational work are done between communication/synchronization events.
D. None of these
Explanation:-
Answer: Option B -> Here, relatively small amounts of computational work are done between communication events.
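The granularity trade-off can be illustrated with simple arithmetic (the chunk sizes below are made-up numbers): if each chunk of work ends in one communication event, then fine-grained chunking produces many more communication events for the same total computation.

```python
import math

# Granularity sketch: 1000 work items split into chunks, with one
# communication event per chunk. Smaller chunks (fine grain) mean far
# more communication for the same amount of computation.
def communication_events(n_items, chunk_size):
    return math.ceil(n_items / chunk_size)

fine_grain = communication_events(1000, 2)      # small chunks of work
coarse_grain = communication_events(1000, 250)  # large chunks of work
```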
Question 5. In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as
A. Decomposition
B. Partitioning
C. Compounding
D. Both A and B
Explanation:-
Answer: Option D -> Both A and B
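A small sketch of decomposition/partitioning (the function name is made up for illustration): the problem data is broken into near-equal contiguous chunks, one per task.

```python
# Domain decomposition sketch: split the problem data into n_tasks
# contiguous chunks whose sizes differ by at most one item.
def partition(data, n_tasks):
    base, extra = divmod(len(data), n_tasks)
    chunks, start = [], 0
    for i in range(n_tasks):
        size = base + (1 if i < extra else 0)  # spread the remainder
        chunks.append(data[start:start + size])
        start += size
    return chunks
```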
Question 6. Point-to-point communication refers to
A. It involves data sharing between more than two tasks, which are often specified as being members of a common group, or collective.
B. It involves two tasks, with one task acting as the sender/producer of data and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Explanation:-
Answer: Option B -> It involves two tasks, with one task acting as the sender/producer of data and the other acting as the receiver/consumer.
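The sender/receiver pair in Option B can be sketched with a message queue standing in for the communication channel (an assumption for illustration; real point-to-point communication would use something like MPI send/receive).

```python
import queue
import threading

# Point-to-point sketch: exactly two tasks, one sender (producer) and
# one receiver (consumer), linked by a message channel.
channel = queue.Queue()
received = []

def sender():
    for msg in ["a", "b", "c"]:
        channel.put(msg)   # producer side of the link
    channel.put(None)      # sentinel marking end of stream

def receiver():
    while True:
        msg = channel.get()  # consumer side of the link
        if msg is None:
            break
        received.append(msg)

t_send = threading.Thread(target=sender)
t_recv = threading.Thread(target=receiver)
t_send.start(); t_recv.start()
t_send.join(); t_recv.join()
```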
Question 7. Shared Memory is
A. A computer architecture where all processors have direct access to common physical memory.
B. It refers to network-based memory access for physical memory that is not common.
C. Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communication regardless of the method employed.
D. None of these
Explanation:-
Answer: Option A -> A computer architecture where all processors have direct access to common physical memory.
Question 8. Data dependence is
A. It involves only those tasks executing a communication operation.
B. It exists between program statements when the order of statement execution affects the results of the program.
C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
D. None of these
Explanation:-
Answer: Option B -> It exists between program statements when the order of statement execution affects the results of the program.
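A data dependence is easy to see in two statements where one reads a value the other writes; swapping their order changes the result. A minimal illustration:

```python
# Data dependence sketch: S2 reads the value S1 writes, so executing
# the statements in a different order changes the program's result.
def original_order():
    x = 1
    x = x + 1   # S1: writes x
    y = x * 2   # S2: reads x
    return y

def swapped_order():
    x = 1
    y = x * 2   # S2 runs first, reading the old value of x
    x = x + 1   # S1 runs second
    return y
```

This is exactly why such statements cannot safely be executed in parallel or reordered: the dependence fixes their order.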
Question 9. Non-Uniform Memory Access (NUMA) is
A. Here all processors have equal access and access times to memory.
B. Here, if one processor updates a location in shared memory, all the other processors know about the update.
C. Here one SMP can directly access the memory of another SMP, and not all processors have equal access time to all memories.
D. None of these
Explanation:-
Answer: Option C -> Here one SMP can directly access the memory of another SMP, and not all processors have equal access time to all memories.
Question 10. In the threads model of parallel programming
A. A single process can have multiple, concurrent execution paths.
B. A single process can have only a single execution path.
C. Multiple processes can have a single, shared execution path.
D. None of these
Explanation:-
Answer: Option A -> A single process can have multiple, concurrent execution paths.
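The threads model can be sketched directly: one process starts several threads, and each thread follows its own concurrent execution path while sharing the process's memory (the names below are illustrative).

```python
import threading

# Threads model sketch: one process, several concurrent execution
# paths, all sharing the same process memory (the `results` dict).
results = {}

def execution_path(name, n):
    results[name] = sum(range(n))  # each thread follows its own path

threads = [
    threading.Thread(target=execution_path, args=(f"t{i}", 10 * (i + 1)))
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```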