NUMA
- non-uniform memory access (NUMA)
Example sentences
The main thing done in the relational database was to use "soft NUMA" and port mapping to get a good distribution of work within the system.
All traffic enters through a single port and is distributed on a round-robin basis to any available NUMA node.
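A round-robin distribution like the one just described can be sketched in a few lines (the node IDs and the `cycle`-based policy here are illustrative assumptions, not the actual server implementation):

```python
from itertools import cycle

# Illustrative node IDs; a real system would discover its NUMA topology.
numa_nodes = [0, 1, 2, 3]
_next_node = cycle(numa_nodes)

def place_connection(conn_id):
    """Assign an incoming connection to the next available node, round-robin."""
    return (conn_id, next(_next_node))

placements = [place_connection(c) for c in range(6)]
print(placements)
# [(0, 0), (1, 1), (2, 2), (3, 3), (4, 0), (5, 1)]
```

Note how connections wrap back to node 0 once every node has received one, which is the "round-robin basis" the sentence refers to.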
To understand how pages of memory from the buffer cache are assigned when using NUMA, see Growing and Shrinking the Buffer Pool Under NUMA.
Systems with a large number of processors may find it advantageous to recompile against the NUMA user-land APIs added in RHEL4.
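Before calling user-land NUMA APIs (such as those provided by libnuma), a program first needs to know which nodes the kernel exposes. A hedged sketch, assuming the Linux sysfs layout (it returns an empty list on non-NUMA or non-Linux hosts):

```python
import os

def online_numa_nodes():
    """List NUMA node IDs from Linux sysfs; [] on non-NUMA/non-Linux hosts."""
    base = "/sys/devices/system/node"
    if not os.path.isdir(base):
        return []
    # Entries look like "node0", "node1", ...; skip files such as "possible".
    return sorted(
        int(name[4:])
        for name in os.listdir(base)
        if name.startswith("node") and name[4:].isdigit()
    )

print(online_numa_nodes())
```

On a two-node machine this would typically print `[0, 1]`; the per-node APIs can then be applied to each discovered node.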
NUMA, like SMP, allows users to harness the combined power of multiple processors, with each processor accessing a common memory pool.
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
Any operation running on a single NUMA node can only use buffer pages from that node.
The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio.
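As a worked example of that definition (the latency figures below are invented for illustration, not measurements of any particular machine):

```python
# Hypothetical access latencies in nanoseconds.
local_ns = 90.0     # memory attached to the CPU's own node
foreign_ns = 180.0  # memory attached to a remote node

# NUMA ratio = cost of foreign access over cost of local access.
numa_ratio = foreign_ns / local_ns
print(numa_ratio)  # 2.0 -> foreign access costs twice as much as local
```

A NUMA ratio close to 1.0 means remote memory is nearly as cheap as local memory; larger ratios make node-local placement more important.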
For high-end machines, new features target performance improvements, scalability, throughput, and NUMA support for SMP machines.
The number of CPUs within a NUMA node depends on the hardware vendor.
This provides automatic load balancing among the NUMA nodes.
On a mail-server benchmark, we show a 39% improvement in performance by automatically splitting the application among multiple NUMA domains.
XXI. To begin from Romulus : he left no children, and Numa Pompilius left none that could be of use to the republic.
Within a NUMA node, the connection is run on the least loaded scheduler on that node.
The NUMA architecture was designed to surpass the scalability limits of the SMP architecture.
Not just for SMP or NUMA, but for everything from a single-node UP system to a massively clustered system.
In NUMA systems, each processor is close to some parts of memory and further from others.
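This near/far relationship is what the node-distance table reported by tools such as `numactl --hardware` encodes. A minimal sketch with invented distances (by Linux convention, 10 is the local distance):

```python
# Invented distance table: distances[a][b] is the relative cost for a
# CPU on node a to reach memory on node b (10 = local, by convention).
distances = {
    0: {0: 10, 1: 21},
    1: {0: 21, 1: 10},
}

def nearest_memory_node(cpu_node):
    """Return the memory node with the lowest distance from cpu_node."""
    row = distances[cpu_node]
    return min(row, key=row.get)

print(nearest_memory_node(0))  # 0: local memory is always closest
```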
In a NUMA architected system, CPUs are arranged in smaller sub-systems called pods.
The NUMA architecture can increase processor speed without increasing the load on the processor bus.
This topic describes how pages of memory from the buffer pool are assigned when using non-uniform memory access (NUMA).
We designed and implemented a fault-containment method and a fault-recovery algorithm, effectively solving the fault-handling problem in CC-NUMA computers.
NUMA architecture provides a scalable solution to this problem.
Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others.
All NUMA topics have been reorganized for this release.
Applications seeking additional performance gains can use user-land NUMA APIs.
Similarly, buffer pool pages are distributed across hardware NUMA nodes.
On NUMA hardware, some regions of memory are on physically different buses from other regions.
When using NUMA, the max server memory and min server memory values are divided evenly among NUMA nodes.
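That even division can be sketched with simple arithmetic (the figures are illustrative, not defaults of any product):

```python
# Illustrative configuration values.
max_server_memory_mb = 16384  # configured memory ceiling for the server
numa_node_count = 4           # hardware NUMA nodes in use

# Divide the configured value evenly among the nodes.
per_node_mb = max_server_memory_mb // numa_node_count
print(per_node_mb)  # 4096 -> each node's share of the ceiling
```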
That means when users run out of capacity on their SMP servers, they can move their applications to NUMA servers with relative ease.
NUMA hardware is provided by the computer manufacturer.
Affinitizing connections to specific processors when using Non-Uniform Memory Access (NUMA).
More than one port can be mapped to the same NUMA node.
You cannot create a soft-NUMA that includes CPUs from different hardware NUMA nodes.
Enabling memory location optimizations for NUMA multi-CPU systems (-XX:+UseNUMA).
Soft-NUMA does not provide memory-to-CPU affinity.
Number of pages that come from a different NUMA node.
There is an instance of the Buffer Node object for each NUMA node in use.
It allows you to monitor the SQL Server buffer pool page distribution for each non-uniform memory access (NUMA) node.
The O(1) scheduler also allows load balancing across CPUs, including NUMA-aware load balancing.