PFN Unveils MN-Core Infrastructure for the Agentic AI Era
- PFN held its inaugural MN-Core conference, revealing the full scope of its proprietary chip philosophy and SDK.
- The company emphasized the critical role of "Tokens per second" in optimizing infrastructure for Agentic AI.
- PFN is countering the physical limits of semiconductor miniaturization with near-memory design and AI-automated chip development.
Preferred Networks (PFN) hosted "MN-Core Technology Conference 25," its first technical event focused exclusively on its proprietary AI accelerator, MN-Core. As the global race for computational resources intensifies amid the generative AI boom, PFN revealed its vertically integrated infrastructure, which spans from custom silicon to a full software stack.
The event signaled the company's clear transition from the research and development phase to a social implementation phase, driven primarily by its commercial AI cloud service, the "Preferred Computing Platform."
The core of the conference sessions was an "architectural challenge" against the physical limits of semiconductor miniaturization, often referred to as the slowing of Moore's Law.
MN-Core utilizes a "near-memory" design philosophy, placing memory extremely close to the processor to minimize data travel distance. This enables the chip to achieve exceptional power efficiency and computational performance simultaneously.
This approach offers a significant advantage for Large Language Models (LLMs), where inference throughput—the speed at which the AI processes data—is critical.
Improving inference speed, or "Tokens per second," is now a vital metric for the realization of "Agentic AI," where AI systems must think autonomously and operate external tools in real-time.
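"Tokens per second" is simply the number of output tokens an inference system produces divided by wall-clock generation time. As a minimal sketch, the following Python snippet measures the decode throughput of a text-generation callable; the `generate` interface and the toy generator are hypothetical stand-ins, not part of PFN's SDK.

```python
import time

def tokens_per_second(generate, prompt, n_runs=3):
    """Return the best observed decode throughput (tokens/sec).

    `generate` is any callable that takes a prompt and returns
    the list of generated tokens -- a hypothetical interface
    used here purely for illustration.
    """
    best = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        best = max(best, len(tokens) / elapsed)
    return best

# Toy stand-in generator: "emits" 20 tokens per call.
def toy_generate(prompt):
    return prompt.split() * 10

rate = tokens_per_second(toy_generate, "hello world")
```

For an agentic workload, this figure matters because each reasoning step and tool call waits on the previous generation, so per-request throughput directly bounds end-to-end responsiveness.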
Looking to the future, PFN suggested a shift toward using AI to design the chips themselves. By using AI to automate and accelerate physical design, PFN aims to drastically shorten semiconductor lead times, which typically take years.
This would allow hardware to adapt quickly to rapidly evolving AI algorithms. PFN also noted that even documentation and SDKs are now being optimized for AI models to "read" and understand, rather than just humans.
As the AI development landscape enters this next phase, industry attention is fixed on how PFN's ability to control both hardware and software will shape its position in the market.