Fast processing of extremely large-scale graphs, consisting of millions to trillions of vertices and hundreds of billions to hundreds of trillions of edges, is becoming increasingly important in domains such as health care, social networks, intelligence, systems biology, and electric power grids. The GIM-V algorithm, based on the MapReduce programming model, was designed as a general graph processing method that supports petabyte-scale graph data. Meanwhile, recent large-scale data-intensive computing systems tend to employ GPU accelerators to obtain high peak performance and high memory bandwidth; however, whether GPUs, combined with suitable optimization techniques, can effectively accelerate the GIM-V algorithm remains an open problem. To address this problem, we implemented a GPU-based GIM-V application and evaluated it on a single node (12 hyper-threaded CPU cores and 1 GPU). The results showed that our GPU-based implementation runs 8.80 to 39.0 times faster than the original Hadoop-based GIM-V implementation (PEGASUS) and 2.72 times faster than a naive CPU-based implementation in the map stage. However, in terms of total elapsed time, our implementation suffers from significant load imbalance between threads on the GPU, which makes it 1.52 times slower than the CPU-based implementation.
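For readers unfamiliar with GIM-V (generalized iterated matrix-vector multiplication), the sketch below illustrates the idea in plain Python: one iteration computes v'_i = assign(v_i, combineAll_j(combine2(m_ij, v_j))), and PageRank is obtained by choosing the three operations appropriately. This is only an illustrative sequential sketch, not the paper's GPU or Hadoop implementation; the function names and edge-list representation are our own assumptions for exposition.

```python
# Illustrative GIM-V sketch (NOT the paper's GPU/Hadoop implementation).
# GIM-V generalizes sparse matrix-vector multiplication with three
# user-defined operations: combine2, combineAll, and assign.

def gim_v(triples, v, combine2, combine_all, assign):
    """One GIM-V iteration: v'_i = assign(v_i, combineAll(combine2(m_ij, v_j)))."""
    n = len(v)
    partials = [[] for _ in range(n)]
    for (i, j, m_ij) in triples:          # sparse matrix M as (row, col, value)
        partials[i].append(combine2(m_ij, v[j]))
    return [assign(v[i], combine_all(partials[i])) for i in range(n)]

def pagerank(graph_edges, n, c=0.85, iters=30):
    """PageRank expressed in GIM-V form, for a graph given as (src, dst) edges."""
    out_deg = [0] * n
    for (src, _dst) in graph_edges:
        out_deg[src] += 1
    # Column-normalized transition matrix: m_ij = 1/out_degree(j) for edge j -> i.
    triples = [(dst, src, 1.0 / out_deg[src]) for (src, dst) in graph_edges]
    v = [1.0 / n] * n                      # uniform initial rank vector
    combine2 = lambda m, vj: c * m * vj    # damped contribution along one edge
    combine_all = lambda xs: (1.0 - c) / n + sum(xs)  # teleport + incoming mass
    assign = lambda vi, x: x               # replace old value with new one
    for _ in range(iters):
        v = gim_v(triples, v, combine2, combine_all, assign)
    return v
```

In the MapReduce formulation, the map stage emits the combine2 partial products keyed by destination vertex, and the reduce stage applies combineAll and assign; it is this map stage that our GPU implementation accelerates.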