Equipped with up to eight AMD Field-Programmable Gate Arrays (FPGAs), AMD EPYC (Milan) processors with up to 192 cores, High Bandwidth Memory (HBM), up to 8 TiB of SSD-based instance storage, and up to 2 TiB of memory, the new F2 instances are available in two sizes and are ready to accelerate your genomics, multimedia processing, big data, satellite communications, networking, silicon simulation, and live video workloads.
A quick recap of FPGAs
Here’s how I explained the FPGA model when we looked at the first generation of FPGA-powered Amazon Elastic Compute Cloud (Amazon EC2) instances.
One of the most interesting paths to a custom hardware solution is known as a Field Programmable Gate Array, or FPGA. Unlike a purpose-built chip that is designed with a single function in mind and then hardwired to implement it, an FPGA is more flexible. After it is plugged into a socket on the PC board, it can be programmed in the field. Each FPGA contains a fixed, finite number of simple logic gates. Programming an FPGA is “simply” a matter of connecting them to create the required logic functions (AND, OR, XOR, and so forth) or memory elements (flip-flops and shift registers). Unlike a CPU, which is essentially serial (with some parallel elements) and has fixed-size instructions and data paths (typically 32 or 64 bits), an FPGA can be programmed to perform many operations in parallel, and the operations themselves can be of almost any width, large or small.
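To make the serial-versus-parallel contrast concrete, here is a small conceptual sketch in plain Python (not FPGA code): a CPU-style loop visits one bit pair at a time, while an FPGA fabric evaluates every gate in a wide logic function at once. The function names are mine, for illustration only.

```python
# Conceptual illustration: an FPGA evaluates many logic functions in
# parallel each clock cycle, while a CPU steps through fixed-width
# instructions serially.

def serial_and(a_bits, b_bits):
    """CPU-style: visit each bit pair one at a time."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def parallel_and(a_word, b_word):
    """FPGA-style: one 'cycle' produces all output bits at once.
    A single wide bitwise AND stands in for a fabric of AND gates
    of arbitrary width."""
    return a_word & b_word

a = 0b1100_1010
b = 0b1010_0110
assert parallel_and(a, b) == 0b1000_0010          # all 8 bits in one step
assert serial_and([1, 1, 0], [1, 0, 1]) == [1, 0, 0]  # one bit per step
```

On real hardware the parallel version is not just one instruction but thousands of independent gates, each of whatever width the design calls for.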
Since that launch, AWS customers have used F1 instances to host many different types of applications and services. With a newer FPGA, higher processing power and more memory bandwidth, the new F2 instances are an even better host for highly parallelizable and computationally intensive tasks.
Each AMD Virtex UltraScale+ HBM VU47P FPGA has 2.85 million system logic cells and 9,024 DSP slices (up to 28 TOPs of DSP processing power when processing INT8 values). The FPGA Accelerator Card associated with each F2 instance provides 16 GiB of High Bandwidth Memory and 64 GiB of DDR4 memory per FPGA.
Inside the F2
F2 instances are powered by 3rd generation AMD EPYC (Milan) processors. Compared to F1 instances, they offer up to 3x more processor cores, up to twice the system memory and NVMe storage, and up to 4x the network bandwidth. Each FPGA comes with 16 GiB High Bandwidth Memory (HBM) with bandwidth up to 460 GiB/s. Here are the instance sizes and specs:
| Instance Name | vCPUs | FPGAs | FPGA Memory (HBM / DDR4) | Instance Memory | NVMe Storage | EBS Bandwidth | Network Bandwidth |
|---|---|---|---|---|---|---|---|
| f2.12xlarge | 48 | 2 | 32 GiB / 128 GiB | 512 GiB | 1900 GiB (2 x 950 GiB) | 15 Gbps | 25 Gbps |
| f2.48xlarge | 192 | 8 | 128 GiB / 512 GiB | 2,048 GiB | 7600 GiB (8 x 950 GiB) | 60 Gbps | 100 Gbps |
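For quick capacity planning, the table above can be encoded directly; this sketch (the dictionary and helper name are my own) picks the smallest F2 size that provides a requested FPGA count:

```python
# F2 specs as published in the table above.
F2_SPECS = {
    "f2.12xlarge": {"vcpus": 48, "fpgas": 2, "hbm_gib": 32, "ddr4_gib": 128,
                    "memory_gib": 512, "nvme_gib": 1900, "network_gbps": 25},
    "f2.48xlarge": {"vcpus": 192, "fpgas": 8, "hbm_gib": 128, "ddr4_gib": 512,
                    "memory_gib": 2048, "nvme_gib": 7600, "network_gbps": 100},
}

def smallest_f2_for(fpgas_needed: int) -> str:
    """Return the smallest F2 size offering at least the requested FPGAs."""
    for name, spec in sorted(F2_SPECS.items(), key=lambda kv: kv[1]["fpgas"]):
        if spec["fpgas"] >= fpgas_needed:
            return name
    raise ValueError(f"no F2 size offers {fpgas_needed} FPGAs")

assert smallest_f2_for(1) == "f2.12xlarge"
assert smallest_f2_for(3) == "f2.48xlarge"
```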
The high-end f2.48xlarge instance supports AWS Cloud Digital Interface (CDI) for reliable transfer of uncompressed live video between applications, with inter-instance latency as low as 8 milliseconds.
Creating FPGA applications
The AWS EC2 FPGA Development Kit contains the tools you’ll use to develop, simulate, debug, compile, and run your hardware-accelerated FPGA applications. You can run the FPGA Developer AMI on a memory- or compute-optimized instance for development and simulation, and then use the F2 instance for final debugging and testing.
The tools included in the developer kit support a variety of development paradigms, tools, acceleration languages, and debugging capabilities. Regardless of your choice, you end up creating an Amazon FPGA Image (AFI) that contains your own acceleration logic along with an AWS Shell that implements access to the FPGA’s memory, PCIe bus, interrupts, and external peripherals. You can deploy an AFI to any number of F2 instances, share it with other AWS accounts, or publish it on AWS Marketplace.
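Registering a built design as an AFI goes through the EC2 `CreateFpgaImage` API, which points at a design checkpoint uploaded to Amazon S3. Here is a minimal sketch of building that request with boto3; the bucket, key, and function names are placeholders of my own, not values from this post:

```python
# Sketch: build the kwargs for boto3's ec2_client.create_fpga_image(),
# which registers an uploaded design checkpoint (DCP) as an AFI.
# Bucket and key names below are hypothetical placeholders.
def afi_request(name, bucket, dcp_key, logs_key, description=""):
    return {
        "Name": name,
        "Description": description,
        # S3 location of the design checkpoint tarball.
        "InputStorageLocation": {"Bucket": bucket, "Key": dcp_key},
        # S3 prefix where AFI-generation logs will be written.
        "LogsStorageLocation": {"Bucket": bucket, "Key": logs_key},
    }

req = afi_request("my-accelerator", "my-fpga-bucket",
                  "dcp/my_design.tar", "logs/", "Example AFI")
# In practice: boto3.client("ec2").create_fpga_image(**req)
assert req["InputStorageLocation"]["Key"] == "dcp/my_design.tar"
```

The call returns an AFI ID and a global AFI ID (AGFI) that you then load onto an FPGA slot on the instance.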
If you’ve already built an application that runs on F1 instances, you’ll need to update your development environment to use the latest AMD tools, then rebuild and validate before moving to F2 instances.
FPGA instances in action
Here are some great examples of how F1 and F2 instances can support unique and highly demanding tasks:
Genomics – Multinational pharmaceutical and biotech company AstraZeneca used F1 instances to build the world’s fastest genomic pipeline, capable of processing more than 400,000 whole-genome samples in less than two months. They will adopt Illumina DRAGEN for F2 to achieve better performance at a lower cost while accelerating disease detection, diagnosis and treatment.
Satellite communication – Satellite operators are moving from inflexible and expensive physical infrastructure (modulators, demodulators, combiners, hubs, and so forth) to agile, software-defined solutions powered by FPGAs. Using digital signal processor (DSP) elements on the FPGA, these solutions can be reconfigured in the field to support new waveforms and meet changing requirements. Key F2 features such as support for up to eight FPGAs per instance, generous amounts of network bandwidth, and Data Plane Development Kit (DPDK) support using virtual Ethernet can be used to support parallel processing of multiple complex waveforms.
Analytics – NeuroBlade’s SQL Processing Unit (SPU) integrates with Presto, Apache Spark, and other open source query engines to provide faster query processing and superior query throughput efficiency when running on F2 instances.
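As a rough illustration of the kind of waveform-processing kernel a software-defined satellite modem offloads to FPGA DSP slices, here is a toy BPSK modulate/demodulate round trip in plain Python (names and values mine; on an FPGA, every symbol lane would run in parallel across DSP elements):

```python
# Toy BPSK pipeline: map bits to +/-1 symbols, perturb them as a stand-in
# for channel noise, and slice them back to bits.
def bpsk_modulate(bits):
    return [1.0 if b else -1.0 for b in bits]

def bpsk_demodulate(symbols):
    return [1 if s >= 0.0 else 0 for s in symbols]

bits = [1, 0, 1, 1, 0]
noise = [0.2, -0.1, 0.3, -0.4, 0.1]
noisy = [s + n for s, n in zip(bpsk_modulate(bits), noise)]
assert bpsk_demodulate(noisy) == bits  # recovered despite the noise
```

Swapping in a new waveform here is a code change; on an FPGA it is a reprogramming of the same DSP fabric, which is exactly the flexibility the satellite use case relies on.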
Things you should know
Here are a few final things you should know about F2 instances:
Regions – F2 instances are available today in the US East (N. Virginia) and Europe (London) AWS Regions, with plans to expand availability to additional Regions over time.
Operating systems – F2 instances are Linux only.
Purchase options – F2 instances are available in On-Demand, Spot, Savings Plans, Dedicated Instance, and Dedicated Host form.
— Jeff;