How OpenSynth Uses PyTorch to Accelerate Compute for Energy Modelling Applications
https://pytorch.org/blog/how-opensynth-uses-pytorch-to-accelerate-compute-for-energy-modelling-applications/
Wed, 14 May 2025

OpenSynth, an open source community hosted by LF Energy that is democratising access to synthetic energy demand data, has recently adopted PyTorch to improve the experience of its users and community.

Access to smart meter data is essential to rapid and successful energy transitions. Researchers, modelers and policymakers need to understand how energy demand profiles are changing, in a system that requires greater real time optimization of demand and supply on the grid. Yet current global energy modeling and policymaking is still largely based on static and highly aggregated data from the past – when energy flowed in one direction, consumer profiles were relatively predictable, and power generation was highly controllable.

The major challenge is that access to demand data is highly restrictive, as a result of privacy protections. Rather than joining industry calls to unlock raw smart meter data through existing mechanisms, by tackling current data regulations and smart meter legislation, OpenSynth believes generating synthetic data is the fastest way to achieve widespread, global access to smart meter datasets.

The community empowers holders of raw smart meter (i.e. demand) data to generate and share synthetic data and models that can be used by researchers, industry innovators and policy-makers. 

PyTorch allowed the OpenSynth community to use GPU compute to accelerate training and to adopt distributed training. End users with access to multiple GPUs can split the dataset into multiple smaller datasets and parallelise the work across devices, speeding up computation further. This allows training to scale to much bigger datasets than before.

The Business Challenge

Centre for Net Zero, the non-profit that originally developed OpenSynth before it was contributed to LF Energy, has also developed an algorithm called Faraday, available via OpenSynth to its users, that can generate synthetic smart meter data. The Faraday algorithm consists of two components: an autoencoder module and a Gaussian Mixture Model (GMM) module.

The Gaussian Mixture Model (GMM) of Faraday was originally implemented using scikit-learn, a popular library among data scientists for training many different machine learning algorithms. However, that implementation does not scale well to large datasets, because it only supports CPUs (central processing units) and cannot offload computation to GPUs (graphics processing units). A GPU is a more powerful chip that performs the mathematical operations underlying machine learning much faster, and is commonly used to train deep learning models.

Furthermore, the scikit-learn implementation does not support parallelisation. Parallelising the computation means splitting the original dataset into multiple smaller, independent datasets, training a smaller model on each one, and then combining those smaller models into a single model.
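As a hedged illustration of that combine step (a generic sketch, not the actual Faraday code), per-shard mixtures can be pooled into one larger mixture by rescaling each shard's component weights by that shard's share of the data:

```python
def merge_gmms(shard_models):
    """Pool per-shard Gaussian mixtures into one larger mixture.

    shard_models: list of (n_samples, weights, means) tuples, where each
    shard's `weights` sum to 1. Covariances would be carried along in the
    same way; they are omitted here to keep the sketch short.
    """
    total = sum(n for n, _, _ in shard_models)
    merged_weights, merged_means = [], []
    for n, weights, means in shard_models:
        share = n / total  # this shard's fraction of the pooled data
        merged_weights.extend(w * share for w in weights)
        merged_means.extend(means)
    return merged_weights, merged_means
```

Each shard's model keeps its components; only the mixture weights are rescaled, so the pooled model is an exact mixture over all shards.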

A different implementation was needed that supports both parallel computation and GPU acceleration. 

How OpenSynth Used PyTorch

The OpenSynth community recently ported the GMM module from Faraday to PyTorch. Originally implemented using scikit-learn, this reimplementation enables the use of GPUs for training GMMs, significantly accelerating computational performance.

By leveraging PyTorch’s powerful GPU capabilities, the new GMM module can now handle much larger datasets and train significantly faster, making it an invaluable tool for practitioners working with datasets that cannot fit into memory. This update allows users to scale their models and processes more efficiently, leading to faster insights and improved results in energy modeling applications.
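To see why GPUs help here, the expensive inner loop of GMM training is dense tensor arithmetic that PyTorch batches naturally. The following is a minimal, generic sketch of the EM E-step for a diagonal-covariance GMM (an illustration, not OpenSynth's actual implementation); moving the tensors to a GPU with `.to("cuda")` accelerates it with no other code changes:

```python
import math
import torch

def e_step(x, weights, means, variances):
    """E-step of EM for a diagonal-covariance GMM.

    x: (N, D) data; weights: (K,); means: (K, D); variances: (K, D).
    Returns the (N, K) matrix of component responsibilities.
    """
    d = x.shape[1]
    # log N(x_n | mu_k, diag(var_k)) for every point/component pair
    diff2 = (x.unsqueeze(1) - means.unsqueeze(0)) ** 2 / variances  # (N, K, D)
    log_prob = -0.5 * (diff2.sum(dim=2)
                       + torch.log(variances).sum(dim=1)
                       + d * math.log(2 * math.pi))
    log_resp = torch.log(weights) + log_prob  # unnormalised log responsibilities
    log_resp = log_resp - torch.logsumexp(log_resp, dim=1, keepdim=True)
    return log_resp.exp()
```

Because every step is a batched tensor operation, the same code runs on CPU or GPU, which is exactly the portability the port to PyTorch provides.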

A Word from OpenSynth

PyTorch LF Energy OpenSynth Case Study

“Open source is a powerful catalyst for change. Our open data community, OpenSynth, is democratising global access to synthetic energy demand data – unlocking a diversity of downstream applications that can accelerate the decarbonisation of energy systems. PyTorch has an incredible open source ecosystem that enables us to significantly speed up computation for OpenSynth’s users, using distributed GPUs. Without this open source ecosystem, it would have been impossible to implement this change – and slowed down the efforts of those seeking to affect net zero action.” – Sheng Chai, Senior Data Scientist, Centre for Net Zero

Learn More

For more information, visit the LF Energy OpenSynth website.

How IBM Research Uses PyTorch and TerraTorch to Make Geospatial Computer Vision Accessible for Everyone
https://pytorch.org/blog/how-ibm-uses-pt-terratorch/
Thu, 01 May 2025

Earth Observation-based analytics are becoming essential for understanding our planet, from monitoring deforestation to tracking urban development and analyzing the impacts of climate change. However, the coding and deep learning skills required to apply AI models to satellite imagery and earth observation data have traditionally been a major barrier for many practitioners.

With IBM Research’s launch of TerraTorch 1.0, a PyTorch domain library for fine-tuning geospatial computer vision foundation models, we make geospatial AI not only more accessible but also more practical for the wider PyTorch community. Our goal: simplify the process so that any data scientist, researcher, or enthusiast can build powerful geospatial models with ease and with low GPU and data-processing requirements.

The power of foundation models: even with 75-95% of the input data removed, the models do a fantastic job of reconstructing the input, thereby learning the underlying physics of our planet in a deep latent space

The Business Challenge

Our goal was to remove the technical barriers that prevent people from working with satellite imagery, weather and climate data at scale. Together with NASA, we’ve developed the Prithvi family of foundation models. The clean API that PyTorch provides made it straightforward to integrate the latest innovations from AI research.

We wanted to create a framework that anyone can use to go from raw data to inference-ready models in just a few steps.


How a weather and climate foundation model created and fine-tuned on PyTorch is used for weather forecasts

How IBM Research Used PyTorch

We’ve built TerraTorch on top of PyTorch, leveraging its dynamic ecosystem to integrate:

  • PyTorch Lightning for clean, scalable training loops
  • TorchGeo for geospatial data handling and transformations (PyTorch transforms)
  • Foundation model support: for models like TerraMind, the leading generative multimodal foundation model co-developed by IBM and ESA, and the Prithvi family, co-developed by IBM and NASA, TerraTorch has been used to fine-tune all of the downstream geospatial models for satellite imagery, weather and climate data, including the family of fine-tuned models that IBM has released as part of Granite. Other foundation models and ecosystem components, such as Clay, SatMAE, Satlas, DeCUR and DOFA, are also included in TerraTorch.
  • Powerful and state-of-the-art vision transformers to experiment with modern neural network architectures
  • TerraTorch-Iterate, built on top of PyTorch, Optuna, MLflow and Ray Tune, for Hyperparameter Optimization (HPO), Neural Architecture Search (NAS) and Foundation Model Benchmarking (GeoBench), where TerraTorch became the reference implementation

The fine-tuning and inference process is completely described in a single YAML config file. There, the architectural building blocks of the model (backbone, neck, decoder, head) are defined. The Model Factory assembles the model using the built-in and custom registries. In addition, the Optimizer and Data Modules are created as defined in the config. Finally, everything is passed to the Lightning Trainer, which executes the task.
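To make the shape of such a config concrete, here is an illustrative sketch; the field names are indicative of the structure just described (trainer, model factory, backbone/decoder blocks, optimizer, data module), not the exact TerraTorch schema:

```yaml
# Illustrative sketch only - consult the TerraTorch docs for the real schema
trainer:
  max_epochs: 50
  precision: 16-mixed             # passed through to the Lightning Trainer
model:
  model_factory: EncoderDecoderFactory
  model_args:
    backbone: prithvi_eo_v2_300   # pretrained foundation-model encoder
    decoder: FCNDecoder           # task-specific decoder block
    head_dropout: 0.1
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 1.0e-4
data:
  class_path: GenericNonGeoSegmentationDataModule
```

A single file like this drives both fine-tuning and inference, which is what keeps the workflow to "a few steps" for newcomers.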

With PyTorch’s flexibility, we were able to prototype quickly, iterate on model architectures, and deploy pipelines for a range of geospatial applications — from flood and biomass detection to increasing resolution of climate data, where some of our work became part of the IBM Granite Geospatial Model Family.


Architecture of the Prithvi-EO-2.0-600M foundation model which IBM Research developed together with NASA

Solving AI Challenges with PyTorch

PyTorch helped us to tackle three major challenges:

  • Ease of experimentation: Dynamic computation graphs, automatic differentiation, full abstraction of CUDA and rich visualization tools made it simple to test different models and training strategies.
  • Scalability: With DDP, FSDP, PyTorch Lightning and TorchGeo, we could train models on large-scale datasets without worrying about infrastructure.
  • Community support: PyTorch – the de-facto standard in AI research – with its active community and excellent documentation made it easy to overcome hurdles and stay up to date with the latest advancements in AI research.

A Word from IBM Research

“PyTorch gave me the power to turn complex linear algebra and optimization problems into accessible, shareable solutions for the community. It feels empowering that we’re building and fine-tuning models for anyone curious about understanding our planet through AI.”

— Romeo Kienzler, AI Research Engineer at IBM Research Zurich, Rueschlikon

The Benefits of Using PyTorch

Using PyTorch allowed us to:

  • Build a reproducible, open-source framework for fine-tuning geospatial foundation models
  • Share our work with the community through easy-to-follow notebooks, TerraTorch configuration files, tutorials and model checkpoints on HuggingFace
  • Rapidly iterate over foundation model architectures and deploy fine-tuned models for inference, from research to real-world client products

Learn More

For more information about this project and to explore the code, visit:

How Intel Uses PyTorch to Empower Generative AI through Intel Arc GPUs
https://pytorch.org/blog/how-intel-uses-pytorch-to-empower-generative-ai-through-intel-arc-gpus/
Fri, 24 Jan 2025

Intel has long been at the forefront of technological innovation, and its recent venture into Generative AI (GenAI) solutions is no exception. With the rise of AI-powered gaming experiences, Intel sought to deliver an accessible and intuitive GenAI inferencing solution tailored for AI PCs powered by Intel’s latest GPUs. By leveraging PyTorch as the backbone for development efforts, Intel successfully launched AI Playground, an open source application that showcases advanced GenAI workloads.

The Business Challenge

Our goal was to deliver an accessible and intuitive GenAI inferencing solution tailored for AI PCs powered by Intel. We recognized the need to showcase the capabilities of the latest GenAI workloads on our newest line of client GPUs. To address this, we developed a starter application, AI Playground, which is open source and includes a comprehensive developer reference sample available on GitHub using PyTorch. This application seamlessly integrates image generation, image enhancement, and chatbot functionalities, using retrieval-augmented generation (RAG) features, all within a single, user-friendly installation package. This initiative not only demonstrates the functionality of these AI workloads but also serves as an educational resource for the ecosystem, guiding developers on effectively leveraging the Intel® Arc™ GPU product line for advanced AI applications. This solution leverages Intel® Arc™ Xe Cores and Xe Matrix Extensions (XMX) for accelerating inferencing.

AI Playground

How Intel Used PyTorch

PyTorch is the core AI framework for AI Playground. We extensively leverage PyTorch’s eager mode, which aligns perfectly with the dynamic and iterative nature of our generative models. This approach not only enhances our development workflow but also enables us to rapidly prototype and iterate on advanced AI features. By harnessing PyTorch’s powerful capabilities, we have created a robust reference sample that showcases the potential of GenAI on Intel GPUs in one cohesive application.
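As a toy illustration of what eager mode buys in this kind of workflow (a hypothetical stand-in model, not AI Playground's actual architecture), ordinary Python control flow can drive an iterative generation loop and be stepped through in a debugger, with no graph compilation step:

```python
import torch
from torch import nn

class TinyRefiner(nn.Module):
    """Hypothetical stand-in for an iterative generative model."""

    def __init__(self, dim=16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, steps=3):
        # Eager mode: plain Python loops and branches run as written,
        # so each refinement step can be inspected or modified live.
        for _ in range(steps):
            x = torch.tanh(self.proj(x))
        return x

model = TinyRefiner()
out = model(torch.randn(4, 16))
```

Because each line executes immediately, prototyping a new sampling loop or conditioning trick is just a matter of editing the Python, which is the rapid-iteration property described above.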

Solving AI Challenges with PyTorch

PyTorch has been instrumental in addressing our AI challenges by providing a robust training and inference framework optimized for discrete and integrated Intel Arc GPU product lines. Choosing PyTorch over alternative frameworks or APIs was crucial. Other options would have necessitated additional custom development or one-off solutions, which could have significantly slowed our time to market and limited our feature set. With PyTorch, we leveraged its flexibility and ease of use, allowing our team to focus on innovation through experimentation, rather than infrastructure. The integration of Intel® Extension for PyTorch further enhanced performance by optimizing computational efficiency and enabling seamless scaling on Intel hardware, ensuring that our application ran faster and more efficiently.

A Word from Intel

“With PyTorch as the backbone of our AI Playground project, we achieved rapid development cycles that significantly accelerated our time to market. This flexibility enabled us to iteratively enhance features and effectively align with the commitments of our hardware launches in 2024.”

– Bob Duffy, AI Playground Product Manager

PyTorch Case Study

The Benefits of Using PyTorch

The biggest benefit of using PyTorch for us is the large PyTorch ecosystem, which connects us with an active and cooperative community of developers. This collaboration has facilitated the seamless deployment of key features from existing open source projects, allowing us to integrate the latest GenAI capabilities into AI Playground. Remarkably, we accomplished this with minimal re-coding, ensuring that these advanced features are readily accessible on Intel Arc GPUs.

Learn More

For more information about Intel’s AI Playground and collaboration with PyTorch, visit the following links:
