People who speak two or more languages tend to alternate between them while speaking. This phenomenon is called code-switching, and it occurs frequently in multilingual societies. Automatic speech recognition (ASR) for code-switching speech is challenging both acoustically and linguistically, especially given the scarcity of code-switching data. This work aims to improve a code-switching ASR system by improving its language model. We explore code-switching data augmentation for language modeling by utilizing ASR decoding lattices to address the pronunciation variation and data scarcity problems. We incorporate both acoustic and textual information by pretraining GPT-2, a Transformer-based language model, on code-switching ASR decoding lattices. Our approach achieves around a 2-point absolute word error rate (WER) reduction over the baseline n-gram language model, and a 0.33-point absolute reduction over the lattice-rescored baseline WER.
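To make the pipeline concrete, the sketch below illustrates one way the approach could be realized: continue pretraining GPT-2 on word sequences linearized from ASR decoding lattices, then use the adapted model to rescore n-best hypotheses. This is a minimal illustration under assumptions, not the authors' implementation; the file name lattice_paths.txt, the interpolation weight, and all hyperparameters are placeholders.

```python
# Minimal sketch (assumptions, not the paper's code): adapt GPT-2 on
# lattice-derived text, then rescore n-best hypotheses with it.
import torch
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "lattice_paths.txt" is assumed to hold one linearized lattice path
# (an alternative word sequence from the decoding lattice) per line.
dataset = load_dataset("text", data_files={"train": "lattice_paths.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-lattice",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

def lm_score(text):
    """Approximate total log-likelihood of a hypothesis under the LM
    (model loss is the mean per-token negative log-likelihood)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item() * ids.size(1)

# Hypothetical n-best list with acoustic scores; in practice it comes
# from the ASR decoder's lattice.
nbest = [{"text": "saya mau order pizza", "am_score": -120.4},
         {"text": "saya mau other pizza", "am_score": -119.8}]

# Rescore: combine acoustic and LM scores; the 0.5 weight is an
# assumed value that would be tuned on a development set.
best = max(nbest, key=lambda h: h["am_score"] + 0.5 * lm_score(h["text"]))
print(best["text"])
```

In this setup the LM sees the acoustically plausible alternatives encoded in the lattice rather than only the reference transcripts, which is one plausible way the augmentation described above exposes the model to pronunciation-driven variation.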