NUMA
- non-uniform memory access
Example sentences
The main thing done in the relational database was to use "soft NUMA" and port mapping to get a good distribution of work within the system.
All traffic enters through a single port and is distributed on a round-robin basis to any available NUMA node.
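The round-robin dispatch described above can be sketched in a few lines; this is an illustrative toy, not SQL Server's actual scheduler, and the node IDs are made-up numbers:

```python
from itertools import cycle

# Hypothetical sketch: connections arriving on a single port are handed
# out to the available NUMA nodes in round-robin order.
available_nodes = [0, 1, 2, 3]
node_cycle = cycle(available_nodes)

def assign_connection(conn_id: int) -> int:
    """Return the NUMA node the next incoming connection is dispatched to."""
    return next(node_cycle)

assignments = [assign_connection(i) for i in range(6)]
print(assignments)  # [0, 1, 2, 3, 0, 1]
```

The wrap-around after node 3 is what keeps load roughly even across nodes when connections are short-lived and uniform.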
To understand how pages of memory from the buffer cache are assigned when using NUMA, see Growing and Shrinking the Buffer Pool Under NUMA.
Systems with a large number of processors may find it advantageous to recompile against the NUMA user-land APIs added in RHEL4.
NUMA, like SMP, allows users to harness the combined power of multiple processors, with each processor accessing a common memory pool.
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
Any operation running on a single NUMA node can only use buffer pages from that node.
The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio.
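The NUMA ratio defined above is simply remote access cost divided by local access cost; a minimal sketch, with made-up latencies for illustration:

```python
# Hypothetical sketch: computing the NUMA ratio from access latencies.
# The latency figures below are invented, not measurements.
local_latency_ns = 100    # cost of accessing memory on the same node
remote_latency_ns = 160   # cost of accessing memory on a foreign node

numa_ratio = remote_latency_ns / local_latency_ns
print(numa_ratio)  # 1.6
```

A ratio of 1.0 would mean uniform access (effectively SMP); the further the ratio climbs above 1.0, the more it pays to keep each thread's working set on its local node.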
For high-end machines, new features target performance improvements, scalability, throughput, and NUMA support for SMP machines.
The number of CPUs within a NUMA node depends on the hardware vendor.
This provides automatic load balancing among the NUMA nodes.
On a mail-server benchmark, we show a 39% improvement in performance by automatically splitting the application among multiple NUMA domains.
Within a NUMA node, the connection is run on the least loaded scheduler on that node.
The NUMA architecture was designed to surpass the scalability limits of the SMP architecture.
Not just for SMP or NUMA, but for everything from a single-node UP system to a massively clustered system.
In NUMA systems, each processor is close to some parts of memory and further from others.
In a NUMA-architected system, CPUs are arranged into smaller sub-systems called pods.
The NUMA architecture can increase processor speed without increasing the load on the processor bus.
This topic describes how pages of memory from the buffer pool are assigned when using non-uniform memory access (NUMA).
We design and implement a fault-containment method and a fault-recovery algorithm, effectively solving the fault-handling problem in CC-NUMA computers.
NUMA architecture provides a scalable solution to this problem.
Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others.
All NUMA topics have been reorganized for this release.
Applications seeking additional performance gains can use user-land NUMA APIs.
Similarly, buffer pool pages are distributed across hardware NUMA nodes.
On NUMA hardware, some regions of memory are on physically different buses from other regions.
When using NUMA, the max server memory and min server memory values are divided evenly among NUMA nodes.
That means when users run out of capacity on their SMP servers, they can move their applications to NUMA servers with relative ease.
NUMA hardware is provided by the computer manufacturer.
Affinitizing connections to specific processors when using Non-Uniform Memory Access (NUMA).
More than one port can be mapped to the same NUMA node.
You cannot create a soft-NUMA that includes CPUs from different hardware NUMA nodes.
Enabling memory location optimizations for NUMA multi-CPU systems (-XX:+UseNUMA).
Soft-NUMA does not provide memory-to-CPU affinity.
Number of pages that come from a different NUMA node.
There is an instance of the Buffer Node object for each NUMA node in use.
It allows you to monitor the SQL Server buffer pool page distribution for each non-uniform memory access (NUMA) node.
The O(1) scheduler also allows for load-balancing across CPUs and NUMA-aware load-balancing.