Hi all,

I have passed the GRID K1 through to a Windows 7 VM created by OpenStack with KVM, and I have downloaded this driver: http://www.nvidia.com/download/driverResults.aspx/98543/en-us

However, when I check the device status in the guest, it says: "This device cannot find enough free resources that it can use. (Code 12)."

Do you guys have any good suggestions?

Thanks & Regards,
Arthur
4 Replies
How can I find the right slot, bus, and domain of the GRID K1? At the moment the snippet that builds the address element hard-codes the slot:

alias_name = etree.Element("alias", name=self.name)
dev.append(alias_name)
address_with_type = etree.Element("address",
                                  type='pci',
                                  domain='0x' + self.domain,
                                  bus='0x' + self.bus,
                                  slot='0x06',  # + self.slot,
                                  function='0x' + self.function)
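As an illustration only (this helper is not part of the original code), one way to find the right domain/bus/slot/function values on the host, instead of hard-coding slot='0x06', is to walk sysfs and match NVIDIA's PCI vendor ID 0x10de; the function name below is hypothetical:

import glob
import os

def nvidia_pci_addresses():
    # Return domain/bus/slot/function for every NVIDIA PCI function on the
    # host, taken from the sysfs device name, which has the form dddd:bb:ss.f
    # (e.g. 0000:06:00.0).
    addresses = []
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        with open(os.path.join(dev, "vendor")) as f:
            if f.read().strip() != "0x10de":  # NVIDIA vendor ID
                continue
        domain, bus, rest = os.path.basename(dev).split(":")
        slot, function = rest.split(".")
        addresses.append({"domain": domain, "bus": bus,
                          "slot": slot, "function": function})
    return addresses

if __name__ == "__main__":
    for addr in nvidia_pci_addresses():
        print(addr)

The values come back without the '0x' prefix, so they can be dropped straight into the etree.Element("address", ...) call above.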
Hi Arthur,

Can you check what you get from:

[root@abcd]# lspci -vvvv -d "10de:*" | grep Region
This is the output from the OpenStack server, not the VM:

[root@localhost libvirt]# lspci -vvvv -d "10de:*" | grep Region
Region 0: Memory at 94000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33ff0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33ff8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 5000 [size=128]
Region 0: Memory at 93000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fe0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fe8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 4000 [size=128]
Region 0: Memory at 92000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fd0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fd8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 3000 [size=128]
Region 0: Memory at 91000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fc0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fc8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 2000 [size=128]

One thing I cannot understand: the K1 is passed through to the VM, so should the lspci command be executed on the VM or on the OpenStack server?
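For reference (illustrative, not part of the original reply), summing the BAR sizes in the output above shows how much MMIO space the four K1 GPUs request in total if all of them are handed to a guest; the 64-bit prefetchable BARs alone far exceed a typical ~512 MB 32-bit PCI MMIO hole, which is why the next answer concentrates on the hole size.

# Illustrative only: totals derived from the lspci output above.
GPUS = 4
bar0_32bit_mb = 16        # Region 0, 32-bit non-prefetchable
bar1_64bit_mb = 128       # Region 1, 64-bit prefetchable
bar3_64bit_mb = 32        # Region 3, 64-bit prefetchable

total_32bit = GPUS * bar0_32bit_mb                    # 64 MB
total_64bit = GPUS * (bar1_64bit_mb + bar3_64bit_mb)  # 640 MB

print("32-bit non-prefetchable total: %d MB" % total_32bit)
print("64-bit prefetchable total:     %d MB" % total_64bit)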
The PCI (x86) guest subsystem has some legacy limits on space (memory and I/O).

There is usually an insufficient "PCI MMIO hole", i.e. the reserved memory "hole" of roughly ~512 MB at the end of the first 4 GB of physically addressable memory space, used for mapping 32-bit PCI device memory (for example framebuffers). The size of the "PCI MMIO hole" depends on the emulated chipset and on the parameters of KVM and/or QEMU and/or the guest BIOS (version dependent).

What to try:
- Boot the guest Windows and add up the memory assigned to its PCI devices.
- Try to lower the number of PCI devices given to the guest.
- Try to lower the RAM of the guest (e.g. ~3 GB).
- Try to find the KVM/QEMU parameter that extends the "PCI MMIO hole" for the guest (I am not familiar with KVM, but under Xen/QEMU it is the guest's "mmio_hole" parameter); see the sketch after this list.
- Try to avoid using 32-bit PCI, e.g. enable PCI mapping above 4 GB for the guest (a guest BIOS parameter). NVIDIA should say whether this is possible and supported for the K1 (e.g. 64-bit PCI BARs) and for KVM (maybe a vBIOS + PLX PCIe bridge upgrade on the K1 card is needed; check with "nvidia-smi -q | egrep 'Version|Firmware'").

M.C>

PS:
https://en.wikipedia.org/wiki/PCI_hole
http://techfiles.de/dmelanchthon/files/memory_hole.pdf
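A minimal sketch of that idea under KVM/libvirt, assuming a reasonably recent QEMU: the emulated host bridge exposes a pci-hole64-size property (i440FX-pcihost.pci-hole64-size, or q35-pcihost.pci-hole64-size on q35 machine types) that can be enlarged through libvirt's qemu:commandline extension. The property name, the chosen size, and the helper function below are assumptions to verify against your QEMU/libvirt versions, not something confirmed in this thread.

# Illustrative sketch only: append a <qemu:commandline> override to a libvirt
# domain definition so QEMU enlarges the 64-bit PCI hole. Verify the property
# name for your machine type and QEMU version before relying on it.
from lxml import etree

QEMU_NS = "http://libvirt.org/schemas/domain/qemu/1.0"

def add_pci_hole64_override(domain_xml, size="1024G"):
    root = etree.fromstring(domain_xml)
    cmdline = etree.SubElement(root, "{%s}commandline" % QEMU_NS,
                               nsmap={"qemu": QEMU_NS})
    etree.SubElement(cmdline, "{%s}arg" % QEMU_NS).set("value", "-global")
    etree.SubElement(cmdline, "{%s}arg" % QEMU_NS).set(
        "value", "i440FX-pcihost.pci-hole64-size=%s" % size)
    return etree.tostring(root, pretty_print=True).decode()

In practice the same two <qemu:arg> entries can also be added by hand with "virsh edit" and the guest restarted; whether Windows 7 then maps the K1 BARs above 4 GB still depends on the guest BIOS and driver, as noted in the list above.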