Affordable
GPUs For All

As low as 0.12 USD / hour

We solve the
GPU shortage

GPU4US GPU Computing Sharing Project

Rent a Cloud GPU

Instead of buying a pile of expensive graphics cards, you can simply rent a cloud GPU for as low as 0.12 USD per hour and get better performance than your local GPUs. Validate your AI product ideas and test models and open-source projects at minimal cost.

Buy an AI Training Pod

We collaborate with hardware vendors to provide an AI Training Pod, a dedicated AI server/GPU server that runs locally. After purchasing the AI Training Pod, you can either exclusively deploy, train, and validate your AI projects with all available resources, or share idle GPU power with others through our GPU Computing sharing project, and earn passive income to subsidize the purchase cost.

Get extra computing power with a scalable GPU computing pool.

When running an AI project, you may struggle to find the additional computing power required during peak periods: buying more GPUs is not cost-effective, but not having enough GPUs hurts the user experience. With our GPU computing sharing project, you can obtain scalable GPU computing power without purchasing GPUs, and pay only for the time you occupy it.

Using a fast and stable packaging environment

To solve network issues in certain regions, we provide a fast and stable packaging environment close to the model library. You can quickly build your container image, download models, and synchronize the packaged image to any cloud GPU node, and start your GPU task smoothly.

  • Can’t get a graphics card for AI model training?
  • Want to validate an idea without spending a fortune on graphics cards?
  • Find cloud providers’ GPU servers too expensive for the power you actually need?
  • Watching your purchased graphics cards sit idle and waste money?
  • Need flexible, supplemental GPU computing power for your AI projects?
  • Having trouble accessing model repositories and open-source projects in certain regions?

AI is booming, GPUs are
always scarce.

FAQ

GPU4US uses data center hosting and distributed sharing technology to improve the availability and balance of GPUs. You are both a user and a contributor: time-shared computing spreads costs such as data center hosting and power consumption across everyone. In addition, GPU4US is a non-commercial project; our top priority is improving the availability of GPUs rather than pursuing commercial interests.

Please apply for a cloud GPU through the GPU4US website. Our services are open to everyone, so please do not abuse them or use them to generate content that violates the law.

You will receive a Linux system accessible through SSH, with the necessary drivers and runtime environments pre-installed in the base image.

No. If you need a complete GPU for the long term, you can purchase an “AI Training Pod” device; otherwise, you can continue to rent cloud GPUs, which are shared by members who have purchased an “AI Training Pod”.

“AI Training Pod” is a dedicated device. To maximize the utilization of computing resources, it ships with a Linux system and specialized GPU computing sharing software. You can log in through SSH or a web interface to manage and deploy applications locally, or use a containerization solution to deploy your applications. If you reinstall the system, you CANNOT share your computing resources, NOR can you earn income through GPU computing sharing.

Build your container according to the development specifications, submit it to the GPU computing sharing platform, and set the scaling policy. That’s it. Note that because some computing resources are distributed, online time cannot always be guaranteed, so you need to consider the granularity of task allocation when developing. For example, for a speech recognition application, it is best to split the audio files into small pieces, then process each piece and transmit its result in real time.
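The splitting step above can be sketched in plain Python using only the standard library. This is an illustrative example, not part of the GPU4US platform API: the `split_wav` function, the chunk length, and the file names are all assumptions for the demo.

```python
import math
import struct
import wave

def split_wav(path, chunk_seconds, out_prefix):
    """Split a WAV file into fixed-length chunks that can be
    dispatched to different GPU nodes as independent tasks."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * chunk_seconds)
        chunk_paths = []
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{out_prefix}_{index:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # nframes is corrected on close
                dst.writeframes(frames)
            chunk_paths.append(out_path)
            index += 1
    return chunk_paths

# Demo: generate 5 seconds of 16 kHz mono audio, then split it
# into 2-second chunks (the last chunk is shorter).
rate, seconds = 16000, 5
with wave.open("input.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    samples = (int(8000 * math.sin(2 * math.pi * 440 * t / rate))
               for t in range(rate * seconds))
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

chunks = split_wav("input.wav", 2, "chunk")
print(chunks)  # → ['chunk_000.wav', 'chunk_001.wav', 'chunk_002.wav']
```

Each chunk is a self-contained WAV file, so a node that goes offline mid-job only loses one small piece of work, which can be resubmitted elsewhere.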

Yes, it does. We do not host GeForce series graphics cards in data centers, and the GPUs you access comply with NVIDIA’s end-user license agreement, so you don’t have to worry about possible legal issues.
GPU4US does not directly access or handle the data provided or generated by users during GPU usage. However, due to the nature of GPU computing sharing, your data may be transmitted to other users’ “AI Training Pod”, so please avoid using this project to handle sensitive data. When your rental task is completed, the data will be completely deleted.

Waiting list

Currently, the GPU computing resources of GPU4US are still under preparation and will be launched soon. If you wish to use GPU4US services as soon as possible, please fill out the form to join the waiting list.

About Us

We are a startup team specializing in sharing computing and network resources. In 2012, we founded the well-known WiFi sharing project JooMe, producing over one million WiFi sharing routers and providing easy-to-access WiFi networks to tens of millions of users.
 
Our WiFi sharing routers also have built-in PCDN software and storage systems, which utilize users’ idle bandwidth to provide a peer-to-peer content distribution network while sharing the network.
 
The GPU4US project is our new sharing project developed to meet the demand for GPU usage.