The advent of interconnected technologies, combined with powerful centralized machine learning algorithms, creates a pressing need for privacy. The efficiency and accuracy of any Machine Learning (ML) algorithm are proportional to the quantity and quality of the data collected for training, which can often compromise the data subject's privacy. Federated Learning (FL), or collaborative learning, is a branch of Artificial Intelligence (AI) that decentralizes ML training across edge devices or local servers. This chapter discusses privacy threat models in ML and presents FL as a Privacy-Preserving Machine Learning (PPML) system, distinguishing FL from other decentralized ML approaches. We describe a comprehensive secure FL framework, covering Horizontal FL, Vertical FL, and Federated Transfer Learning, that mitigates privacy issues. For privacy preservation, FL can incorporate Differential Privacy (DP) techniques to provide quantifiable guarantees of data anonymization. We also discuss the FL concepts underlying Local Differential Privacy (LDP) and Global Differential Privacy (GDP). The chapter concludes with open research problems and challenges of FL as PPML, along with implications, limitations, and future scope.
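To make the DP guarantee mentioned above concrete, the sketch below shows the classic Laplace mechanism, the standard way DP provides a quantifiable privacy measure: noise scaled to the query's sensitivity and the privacy budget ε is added to a query result before release. This is a minimal illustrative sketch, not the chapter's own implementation; the query (a count) and its values are hypothetical.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise,
    giving epsilon-differential privacy for a query whose output
    changes by at most `sensitivity` when one record changes."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution:
    # u is uniform on [-0.5, 0.5); noise = -b * sgn(u) * ln(1 - 2|u|).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Hypothetical example: privatize a count query. A count has
# L1 sensitivity 1 (adding/removing one person changes it by at most 1).
true_count = 423
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

In an LDP setting each client would apply such a mechanism to its own data (or model updates) before sending anything to the server, whereas in GDP a trusted aggregator adds the noise once to the aggregated result.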