Firefly Server Series - High Computing Power Solutions


The Firefly team focuses on developing and applying server products centered on cloud computing, big data, and edge computing. It has launched Arm-based servers that support multiple computing units, cover many industries and fields, and provide customers with high-quality solutions.

Firefly Server Series

AI Edge Computing Server
  • Server computing power ranges from 60 to 1570 TOPS, supporting private deployment of Transformer-based large language models such as Gemma-2B, ChatGLM3-6B, and Qwen-1.8B.
  • Supports mainstream modern AI models, including robotics, large language, vision, and image generation models.
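As a rough illustration of the sizing behind private deployment (this is a general rule of thumb, not a Firefly tool; the byte-per-parameter figures are standard quantization widths), the memory a model's weights require can be estimated from its parameter count:

```python
def model_memory_gib(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory needed just for model weights, in GiB.

    params_billions: parameter count in billions (e.g. 6 for ChatGLM3-6B).
    bits_per_param:  16 for FP16, 8 for INT8, 4 for INT4 quantization.
    Activations, KV cache, and runtime overhead are NOT included.
    """
    return params_billions * 1e9 * bits_per_param / 8 / 2**30

# ChatGLM3-6B at FP16 needs roughly 11 GiB for weights alone;
# Qwen-1.8B quantized to INT8 fits in under 2 GiB.
print(round(model_memory_gib(6, 16), 1))   # ~11.2
print(round(model_memory_gib(1.8, 8), 1))  # ~1.7
```

Estimates like this are why quantization matters on edge hardware: halving the bit width halves the weight footprint, letting larger models fit on a given node.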

AI Edge Computing Server Products

Server Model | System on Chip | NPU
CSB1-N10NOrinNX | NVIDIA Jetson Orin NX | 1570 TOPS
CSB1-N10NOrinNano | NVIDIA Jetson Orin Nano | 670 TOPS
CSB1-N10R3576 | Rockchip RK3576 | 60 TOPS
CSB1-N10R3588 | Rockchip RK3588 | 60 TOPS
CSB1-N10S1688 | BM1688 | 160 TOPS
CSA1-N8S1684X | BM1684X | 256 TOPS
CSA1-N8S1684 | BM1684 | 140.8 TOPS

 

Android Cloud Computing Server
  • A single server integrates up to 72 computing nodes, and a single computing unit scores as high as 639,132 in benchmark tests, enough to run large-scale games smoothly; multiple mobile games, PC games, and Android emulators can run simultaneously without strain.
  • Suitable for cloud testing, cloud office, cloud computing, private domain traffic operation, cross-border e-commerce promotion, short-video matrix marketing, running multiple live broadcasts simultaneously, and other scenarios.

Android Cloud Computing Server Products

Server Model | System on Chip | NPU
CSR2-N72R3588S | RK3588S | 432 TOPS
CSR2-N72R3399 | RK3399 | /
CSR1-N10R3588S | RK3588S | 60 TOPS
CSR1-N10R3399 | RK3399 | /

 

Firefly Server Basic Features

Equipped With Backend Management System

  • Equipped with the aBMC intelligent management system, which makes real-time monitoring, software and hardware configuration, troubleshooting, abnormal alarms, system upgrades, and remote operation and maintenance easy to perform; secondary development is also supported.

Effectively Reduce Operation And Maintenance Costs

  • The server tightly integrates computing units, storage, USB, network controllers, power controllers, and sensors into one chassis, reducing users' procurement, development, and operating costs.

Convenience And Easy Development

  • Provides an SDK deep learning development toolkit covering the underlying driver environment, compilers, inference deployment tools, and other software. Supports mainstream network models, custom operator development, Docker containerization, and rapid deployment of algorithm applications.


Firefly Server Application Scenarios

Server products can be widely used in edge computing, cloud computing, cloud storage, big data, artificial intelligence, industrial data acquisition and control, smart parks, smart mining areas, and other scenarios.

Private Deployment Of Large Language Models

  • Supports private deployment of mainstream AI models such as DeepSeek, Llama3, ChatGLM3, and Qwen-1.8B, for better use in AI edge computing scenarios.
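A privately deployed model is usually reached over a local HTTP endpoint on the server itself. As a minimal sketch (the address, port, and model name below are assumptions, since the source does not name a serving stack; many local inference runtimes expose an OpenAI-compatible chat API), such a request can be built with only the Python standard library:

```python
import json
from urllib import request

def build_chat_request(url: str, model: str, prompt: str) -> request.Request:
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local endpoint on the edge server itself:
req = build_chat_request(
    "http://127.0.0.1:8000/v1/chat/completions",  # assumed serving address
    "chatglm3-6b",                                # assumed model name
    "Summarize today's sensor alerts.",
)
# request.urlopen(req) would send it once a serving runtime is listening.
print(req.get_method())  # POST
```

Because the endpoint stays on the local network, prompts and responses never leave the premises, which is the main point of private deployment.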

Smart Industrial Park

  • Internet of Things technology and data analysis enable efficient interaction and intelligent management between devices, and between devices and people, improving the park's operational efficiency and the living and working experience.

Smart Mine

  • Smart mines use advanced sensors, automation equipment, and data analysis technologies to achieve intelligent monitoring, optimization, and management of mining production processes.

 

More Technology Support

Server products support an online aBMC experience, including a device-wide overview, server operation status, and monitoring and analysis; software, hardware, and appearance customization are also supported. Please contact us with further cooperation needs.

 

