TechNet Cyber Supporting Partner Opportunities

Stand Out in the Crowd! If branding, lead generation and market visibility are important to your organization, the supporting partner opportunities available at TechNet Cyber 2022 are exactly what you need. This event attracts over 4,000 cybersecurity professionals who want to see leading industry solutions and take part in networking and business-building opportunities.

Have an idea for a supporting partner opportunity you don't see listed below? Let us know! We are happy to discuss possibilities with you. Contact us today!

A right of first refusal is currently active for several supporting opportunities. Please check back after Wednesday, March 9th to view any new opportunities that may become available. The deadline to purchase supporting partner opportunities is March 25th.

Cerebras Systems  

Sunnyvale, CA
United States
  • Booth: 3004

Cerebras Systems makes the world’s most powerful AI and HPC accelerator system, removing roadblocks to advances in government services, research, policy, and security. Our systems are doing groundbreaking work at leading institutions including Argonne National Laboratory, National Energy Technology Laboratory and Lawrence Livermore National Laboratory. We offer cluster-scale acceleration in a single, easy-to-program device so your researchers can focus on accelerating AI in the public interest.


  • Cerebras CS-2
    The CS-2 is the industry’s fastest AI accelerator. It reduces training times from months to minutes, and inference latencies from milliseconds to microseconds. CS-2 requires a fraction of the space and power of graphics processing unit-based AI compute.

  • The Cerebras CS-2 features 850,000 AI-optimized compute cores, 40 GB of on-chip SRAM, 20 PB/s memory bandwidth and 220 Pb/s interconnect bandwidth, all enabled by purpose-built packaging, cooling, and power delivery. It is fed by 1.2 terabits per second of I/O across 12 100 Gb Ethernet links. Every design choice has been made to accelerate deep learning, reducing training times and inference latencies by orders of magnitude.

    The CS-2 is powered by the largest processor ever built: the industry’s only 2.6-trillion-transistor silicon device. The Cerebras Wafer Scale Engine 2 (WSE-2) delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. At 46,225 mm^2, the WSE-2 is 56 times larger than the largest graphics processing unit, with 123x more compute cores and 1,000x more high-performance on-chip memory.
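
    The headline figures above can be sanity-checked with a little arithmetic. The snippet below is a minimal sketch: the aggregate I/O follows directly from the 12 x 100 Gb Ethernet links, while the GPU die area is merely implied by the quoted 56x size ratio, not an official specification.

    ```python
    # Sanity-check the quoted CS-2 figures.

    links = 12
    link_speed_gbps = 100                    # 12 x 100 Gb Ethernet links
    io_tbps = links * link_speed_gbps / 1000
    # 12 * 100 Gb/s = 1.2 Tb/s, matching the "1.2 terabits per second" figure

    wse2_area_mm2 = 46_225                   # WSE-2 die area from the text
    size_ratio = 56                          # "56 times larger than the largest GPU"
    implied_gpu_area_mm2 = wse2_area_mm2 / size_ratio  # ~825 mm^2 (implied, not official)

    print(f"Aggregate I/O: {io_tbps} Tb/s")
    print(f"Implied GPU die area: {implied_gpu_area_mm2:.0f} mm^2")
    ```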