Hyper-V doesn't see the RAID
Okay, so I finally got my "new" server from our corporate masters, and although it's still older than dirt (in server terms) it is functional, with 16GB of RAM (the max it will hold). Apparently my higher-ups are stuck in the '90s, IT-wise, as they sent me this thing with six 140GB SCSI HDDs configured as RAID 5. So far I have:
- Captured the Server2008 product key for reuse.
- Rebuilt the RAID Array as RAID10. (Lost some space, but we don't actually need much.)
- Installed a 60GB HDD for use with Hyper-V.
- Installed Hyper-V Server 2012 R2 on the 60GB drive (which took some doing, as the server didn't want to boot from anything, for some reason).
- Installed 5Nine on my machine because I couldn't get anything else to work to configure the new Server. (We aren't authorised to use Windows 8+, so I can't go that route.)
And now I'm stuck.
How do I tell Hyper-V to build the VMs on the RAID array? 5Nine doesn't seem to see it, and I can't find any instructions for building anything in the Hyper-V Server CLI. I can list the attached drives from the CLI, but the RAID array isn't listed. Instead all I get are the boot drive (C:), the physical CD/DVD drive (D:), and 4 "CD drives" (E:-H:) which aren't ready and which I think are the virtual drives the BIOS listed at me.
So, can anyone point me in the right direction to get this resolved so I can build my VMs? I've spent two days on this already just getting to this point, so anything that might help would be greatly appreciated.
Greg Strickland
21 Replies
Da_Schmoo
Have you installed the driver for the Raid controller on the host? If so, have you created a partition on the array through the OS?
You should connect to the server using a Windows 8 box; I don't think the server admin tools will work otherwise, whether from a Windows box or another server with a GUI. From cmd on the host you should be able to start diskpart to see if the remaining space is up and running. Once you have built the storage LUN for the VMs you should be able to create your virtual machines.
What type of server is it? Also, what processor is the system using? If it is not a virtualization-capable processor, all of this work might be for naught.
Are you using Diskpart to view the disks? If so, does the "list disk" command show the array at all? I suspect it may be showing as offline under status.
Also, I've been able to manage Hyper-V 2012 (non-R2) from my Windows 7 machine, but not the R2 version. It seems they changed something to prevent that.
Semicolon
1. You're not too far into this. Well, I mean you're two days in, but you don't have much in place right now. Blow it away and install Hyper-V on a USB key, leaving the whole RAID 10 for the VMs; or remove what you have and install it all on a single volume/partition. Don't split the array.
2. You may need to use diskpart (or whatever the PowerShell equivalent is in 2012/R2) to initialize, format, and assign a letter to the volume.
Semicolon
Next step: without an 8/8.1/2012/R2 box to manage this hypervisor from, you're going to need this:
But then again, 5nine should be able to take care of you once you get Hyper-V to see the rest of the drive space.
OP Greg Strickland
Da_Schmoo wrote:
Have you installed the driver for the Raid controller on the host? If so, have you created a partition on the array through the OS?
Er, no. All I'd done so far was actually set up the Array. Is that something I could do from the CLI?
s.bos wrote:
You should connect to the server using a Windows 8 box; I don't think the server admin tools will work otherwise, whether from a Windows box or another server with a GUI. From cmd on the host you should be able to start diskpart to see if the remaining space is up and running. Once you have built the storage LUN for the VMs you should be able to create your virtual machines.
What type of server is it? Also, what processor is the system using? If it is not a virtualization-capable processor, all of this work might be for naught.
As I mentioned, Win8 is a no-go. We aren't allowed to use it, and we don't have any licenses for it. 5Nine is supposed to be able to manage it even from a Win7 box, which is why I snagged it after I couldn't figure out how to do anything from the CLI. Following your diskpart suggestion, I see that "list disk" reports a Disk 0 of 55GB and a Disk 1 of 407GB, so it sees the array. But "list volume" only shows the 55GB drive, the physical DVD drive, and the 4 virtual CD drives, and "list partition" says there aren't any partitions on Disk 1. So I'm guessing I need to build a partition on Disk 1? Am I correct in thinking it's just the one partition for all the VMs?
ppurcell9672 wrote:
Are you using Diskpart to view the disks? If so, does the "list disk" command show the array at all? I suspect it may be showing as offline under status.
Also, I've been able to manage Hyper-V 2012 (non-R2) from my Windows 7 machine, but not the R2 version. It seems they changed something to prevent that.
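For reference, a minimal sketch of creating that single partition from the host's own console. This assumes the array really is Disk 1, as diskpart reports, and that E: is free; the drive letter and volume label are my own choices, not anything from the thread. The Storage cmdlets shown here ship with Server 2012/R2; the equivalent diskpart commands are in the comment.

```powershell
# Equivalent diskpart sequence: select disk 1 / online disk / convert gpt /
# create partition primary / format fs=ntfs quick label=VMStore / assign letter=E
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMStore"
```

One big NTFS volume for all the VMs is the usual layout; Hyper-V only needs a path to store configuration and VHDX files, so there is no need to slice the array into multiple partitions.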
OP Greg Strickland
Semicolon wrote:
1. You're not too far into this. Well, I mean you're two days in, but you don't have much in place right now. Blow it away and install Hyper-V on a USB key, leaving the whole RAID 10 for the VMs; or remove what you have and install it all on a single volume/partition. Don't split the array.
2. You may need to use diskpart (or whatever the PowerShell equivalent is in 2012/R2) to initialize, format, and assign a letter to the volume.
Unfortunately, I don't have any USB sticks large enough. The biggest one that I own personally is only 4GB, and the only one here at work is 128MB. The IT budget here is... complicated. However! I did not install Hyper-V on the RAID array, but on a separate 60GB SATA HDD that I added. If I understand things correctly, I can trade that for a USB stick later, after I get this up and running. Working on the diskpart bit now.
Semicolon wrote:
Next step: without an 8/8.1/2012/R2 box to manage this hypervisor from, you're going to need this:
But then again, 5nine should be able to take care of you once you get Hyper-V to see the rest of the drive space.
Hopefully 5nine will put it together; I already have PowerShell here, so I can also double-check that I have that bit.
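Once the array volume has a drive letter, pointing Hyper-V at it from the host's PowerShell could look roughly like this. A sketch only: the VM name, memory size, VHDX size, and the E: letter are all assumptions, not values from the thread.

```powershell
# Assumes the RAID array was brought online and formatted as E:.
# Create a home for VMs and a new VM whose disk lives on the array.
New-Item -ItemType Directory -Path "E:\VMs" -Force

New-VM -Name "TestVM" `
       -MemoryStartupBytes 2GB `
       -Path "E:\VMs" `
       -NewVHDPath "E:\VMs\TestVM\TestVM.vhdx" `
       -NewVHDSizeBytes 60GB

Start-VM -Name "TestVM"
```

The same result can be reached through 5nine or Hyper-V Manager by setting the VM and VHD default paths to the array's volume.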
Semicolon
The unpacked directory contains instructions on how to add the driver to the image, so that once we install ESXi to the flash drive again, the hypervisor will have drivers for the RAID controller installed in the system and will be able to work with the disks:
- RAID 1 = 2 × 500GB
- RAID 1 = 2 × 2TB
Now I boot a VirtualBox virtual machine with Windows 7 on board and prepare a working environment for the build, using the downloaded RAID controller drivers:
To build my own ESXi image I will need the following.
I allow PowerShell scripts to run:
Start — All Programs — Accessories — Command Prompt, then right-click it and choose "Run as administrator"
C:\Windows\system32>cd /d C:\Windows\System32\WindowsPowerShell\v1.0
C:\Windows\system32>powershell
PS C:\Windows\System32\WindowsPowerShell\v1.0>set-executionpolicy remotesigned
I install the PowerShell 3.0 package (Windows6.1-KB2506143-x86); without it, the next step won't work either.
I install the package VMware-PowerCLI-6.0.0-3056836.exe.
I install the package 7zip.
This workstation will need internet access.
- Via shared folders I transfer the downloaded driver file, together with a script found on the internet, ESXi-Customizer-PS-v2.4.ps1, from the main system to the guest.
- md5sum ESXi-Customizer-PS-v2.4.ps1
  5af8f83ec08faaed500294b69b920d0a  ESXi-Customizer-PS-v2.4.ps1
Start — All Programs — VMware — VMware vSphere PowerCLI, then right-click: "Run as administrator"
I create a Drivers folder and place the vib file in it:
PowerCLI C:\> mkdir C:\Drivers
I place the driver file for my RAID controller into this folder:
vmware-esxi-drivers-scsi-aacraid-550.5.2.1.40301.-1.5.5.1331820.x86_64.vib
PowerCLI C:\> .\ESXi-Customizer-PS-v2.4.ps1 -obDir .\Drivers -sip -v55
Exception setting "windowsize": "The window width must not be greater than 80.
Parameter name: value.Width
Actual value was 120."
+ CategoryInfo : NotSpecified: (:) [], SetValueInvocationException
Script to build a customized ESXi installation ISO or Offline bundle using the VMware PowerCLI ImageBuilder snapin
(Call with -help for instructions)
Logging to C:\Users\aollo\AppData\Local\Temp\ESXi-Customizer-PS.log …
Running with PowerShell version 3.0 and VMware vSphere PowerCLI 6.0 Release 2 build 3056836
Connecting the VMware ESXi Online depot … [OK]
Getting Imageprofiles, please wait … [OK]
Select Base Imageprofile:
Enter selection: 1
Using Imageprofile ESXi-5.5.0-20151204001-standard …
(dated 11/18/2015 20:26:01, AcceptanceLevel: PartnerSupported,
Exporting the Imageprofile to ‘C:\\ESXi-5.5.0-20151204001-standard.iso’. Please be patient …
Once the image is created (in my case it is named ESXi-5.5.0-20151204001-standard.iso), I copy it, again via Shared Folders, to the main system, Ubuntu 12.04.5 Desktop amd64, and following the same steps as in my earlier note I make a bootable flash drive.
aollo@system:~$ sudo parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all
data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) unit GB
(parted) mkpart primary 0.00Gb 4.00Gb
(parted) print
Model: JetFlash Transcend 4GB (scsi)
Disk /dev/sdb: 3911MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3910MB  3909MB  fat16        primary
(parted) quit
Information: Don't forget to update /etc/fstab.
aollo@system:~$ sudo mkfs.msdos /dev/sdb1
mkfs.msdos 3.0.12 (29 Oct 2011)
aollo@system:~$ sudo mkdir /media/cdrom
aollo@system:~$ sudo mount /dev/sdb1 /media/cdrom
aollo@system:~$ /usr/bin/unetbootin method=diskimage isofile="/home/aollo/ISO/ESXi-5.5.0-20151204001-standard_Adaptec_6805E.iso" installtype=USB targetdrive=/dev/sdb1 autoinstall=yes
aollo@system:~$ exitstatus:success
aollo@system:~$ sudo umount /dev/sdb1
or simply, without any automation:
aollo@system:~$ unetbootin
Now I plug this flash drive into the SuperMicro server and try to install the ESXi hypervisor onto that same drive, but no luck: the installation won't start, and all I get is the message:
Missing operating system
Through trial and error I found out that the file system on the flash drive must be FAT32:
aollo@system:~$ sudo fdisk -l
Disk /dev/sdb: 3911 MB, 3911188480 bytes
39 heads, 38 sectors/track, 5154 cylinders, 7639040 sectors total
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf77e43f8
Device     Boot  Start    End      Blocks   Id  System
/dev/sdb1 1432 7639039 3818804 b W95 FAT32
aollo@system:~$ sudo mount /dev/sdb1 /media/cdrom
aollo@system:~$ unetbootin
The image was written successfully, and after plugging the flash drive into the server I ran the installation again, but the result was the same: the hypervisor does not find drivers for the RAID controller.
aollo@system:~$ cd adaptec/vsphere_esxi_5.5/
aollo@system:~/adaptec/vsphere_esxi_5.5$ mv vmware-esxi-drivers-scsi-aacraid-550.5.2.1.40301.-1.5.5.1331820.x86_64.vib adaptec.vib
aollo@system:~/adaptec/vsphere_esxi_5.5$ scp adaptec.vib root@10.7.8.153:/
adaptec.vib 100% 61KB 60.6KB/s 00:00
I connect to the ESXi server over SSH and install the driver, then reboot the server so the changes take effect
aollo@system:~$ ssh -l root 10.7.8.153
The time and date of this login have been sent to the system logs.
VMware offers supported, powerful system administration tools. Please
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
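The install command itself is missing from the transcript; assuming the VIB was copied to the root of the host as /adaptec.vib (as the scp step above suggests), it would have been along these lines. The `--no-sig-check` flag is needed for packages that are not VMware-signed; this is a sketch, not the author's exact command.

```shell
# On the ESXi host, over SSH: install the Adaptec driver VIB, then reboot
# so the new scsi-aacraid module is loaded.
esxcli software vib install -v /adaptec.vib --no-sig-check
reboot
```

The "VIBs Installed / VIBs Removed" output below is what a successful run of this command prints.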
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: Adaptec_Inc_bootbank_scsi-aacraid_5.5.5.2.1.40301-1OEM.550.0.0.1331820
VIBs Removed: VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.550.0.0.1331820
Connection to 10.7.8.153 closed.
After the server reboots, I connect to it from a Windows 7 workstation using the vSphere Client and check for the RAID controller; it is present:
ESXi (10.7.8.153) — Configure — Storage Adapters
Great, the hypervisor sees the RAID controller. Now my task comes down to formatting the devices, i.e. I want to redo the current layout (it is no longer needed).
Configuration — Storage — I select the datastore (in my case it is named Data) and click Delete in the right-hand corner, then confirm my intention
But here I hit a snag: you see, the hypervisor is using this datastore and won't let me delete it:
*** The fdisk command is deprecated: fdisk does not handle GPT partitions. Please use partedUtil
Disk /dev/disks/mpx.vmhba32:C0:T0:L0: 7639040 sectors, 7460K
Disk /dev/disks/mpx.vmhba2:C0:T1:L0: 3900682240 sectors, 3719M
Disk /dev/disks/mpx.vmhba2:C0:T0:L0: 975155200 sectors, 929M
mpx.vmhba2:C0:T0:L0:1 /vmfs/devices/disks/mpx.vmhba2:C0:T0:L0:1 51fa6d03-0a61ea94-5b19-6805ca0a091f 0 System
I managed to unmount the System datastore
But for some reason I cannot unmount the Data datastore:
watchdog-storageRM: Terminating watchdog process with PID 34001
ESXi host (10.7.8.153) — Configuration — Storage — I click Rescan All
That didn't help. While checking the ESXi host's advanced settings, though, I came across the following parameter:
ESXi host — Configuration — Advanced Settings —
According to the description found on the official site, this value should be changed to /tmp; save the changes and reboot the host
Connection to 10.7.8.153 closed.
Afterwards I check the value of the changed parameter:
nothing has changed.
I try changing it via the console:
Then I put the host into maintenance mode:
vSphere Client — ESXi host — right-click the host, find Enter Maintenance Mode, and confirm by clicking Yes
I reboot the ESXi host once more:
Connection to 10.7.8.153 closed.
Afterwards I connect again via the vSphere Client and, most importantly, successfully unmount and delete all the datastores, both Data and System.
Only then can the datastores left over from the previous version of the ESXi hypervisor on this server be deleted via the vSphere Client. Why am I redoing all this? The flash drive the hypervisor was installed on is many, many years old, the server itself is no longer needed, and having all the installation steps written up will be useful, just in case.
Now I create the datastores anew:
vSphere Client — ESXi host — Configuration — Storage — Add Storage… — choose Disk/LUN — select the 500GB disk, then the VMFS-5 file system type — name it datastore1, use all available space. In the same way I create a datastore named datastore2 on the 2TB disk.
That concludes this note. I have covered how to build your own image, how to import the module into an existing hypervisor if the image didn't work out, and how to recreate local datastores backed by disks on a RAID controller. Until next time; regards, the blog's author, ekzorchik.
We have a simple SuperMicro server and a pile of SATA disks (nothing fancy, but it's what we have). I want to build a Hyper-V server on it. No more than 4-5 virtual machines are planned (no heavy loads). The RAID controller is the one built into the motherboard (ICH10), so I expect to use Storage Spaces. The options:
1) OS on a separate disk, 5 disks in RAID 5 (one as a hot spare)
2) OS on a separate disk, 2 mirrored RAIDs of 2 disks each (spread the VMs across them)
3) OS on a separate disk, RAID 10 (via the motherboard).
As I understand it, RAID 10 cannot be implemented in Storage Spaces.
What would be the optimal choice? (P.S. Server 2012 and 2012 R2)
Replies
That a dedicated RAID controller is preferable, I know. But in this case it is built into the motherboard, so it has few advantages. As for SSDs, they are planned for the near future; in fact that is one of the arguments for Storage Spaces, so the RAID won't have to be rebuilt later.
Then RAID 1 for the OS. The rest can go to Storage Spaces. For Hyper-V, mirror (2-way in your case; 3-way needs more than 5 disks) is preferable to parity.
All replies
So that makes six disks in total?
RAID 1 for the OS, as a rule. Then build the RAID on the controller (RAID 10 + 1 hot spare).
Storage Spaces vs HW RAID: HW RAID (on a dedicated controller) is always preferable, IMO.
That said, in 2012 R2 Storage Spaces became more attractive thanks to storage tiers. Worth considering if you have SSDs.
What does 3-way give? Additional fault tolerance?
Yes, six disks.
What does 3-way give? Additional fault tolerance?
Yes. 2-way stores 2 copies of the data. 3-way = 3 copies = one more disk of fault tolerance compared to 2-way.
Won't performance suffer because the system has to make an extra copy in parallel? And since it is Storage Spaces, could I use a spare disk of the same capacity but a different model?
Won't performance suffer because the system has to make an extra copy in parallel? And since it is Storage Spaces, could I use a spare disk of the same capacity but a different model?
Of course, in theory it will "suffer". But let's be realistic. If you want speed: simple (~RAID 0) = no fault tolerance. If you want to sleep a little easier about the configuration: mirror/parity, i.e. 2-way/3-way (~RAID 1) / single parity (~RAID 5) / dual parity (~RAID 6). RAID 5/6 is, as a rule, for backups, archives and so on; the other types for production. That is my view. Situations vary; maybe you don't want to sacrifice space with mirror and would prefer other options, for example.
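A sketch of the 2-way mirror setup recommended above, using the Storage Spaces cmdlets that ship with Server 2012/R2. The pool and virtual disk names are my own placeholders, and this assumes the four non-OS data disks are blank and eligible for pooling:

```powershell
# Gather every disk that can be pooled (blank, non-OS disks),
# create a pool, and carve a 2-way mirror space out of it for the VMs.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "VMPool" `
                -StorageSubSystemFriendlyName "Storage Spaces*" `
                -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "VMPool" `
                -FriendlyName "VMMirror" `
                -ResiliencySettingName Mirror `
                -NumberOfDataCopies 2 `
                -UseMaximumSize
```

The resulting virtual disk then appears in Get-Disk like any other disk and is initialized and formatted the usual way before Hyper-V uses it.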
I have installed Hyper-V Server 2019 and can install a VM to it, so that isn't the issue.
I have an SSD in the physical box (120Gb) on which I have installed the Hyper-V core OS.
I also have 3 × 1TB HDDs in there that I want to utilise. I want two of the HDDs to be mirrored for file storage, and the last one to act as a local backup disk (I was going to RAID 5 all three, but changed my mind).
If I install a VM to the SSD, how can I get it to see the other drives? As they are local to the Core OS, the VM isn't seeing them.
They are configured, initialised, partitioned etc. This was done using DISKPART.
At a guess I think I need to make some directories on the drives, and set the permissions? But how, when Hyper-V Core doesn't have a GUI? Is it all done through PowerShell?
Or should the VM see the host's drives regardless, and something else is the issue?
14 Replies
kevinmhsieh
Not sure how you plan to do backups, but the normal thing to do with the RAID is to configure it, bring it online as a local NTFS volume on Hyper-V, and then create a second VHDX for your VM and store it on that drive.
I use diskpart.exe to provision storage on Hyper-V Server.
adrian_ych
I think the OP may need to know the difference between Server 20xx with the Hyper-V role vs Hyper-V Server (or ESXi etc.). Even when saying "Hyper-V core", is it a Server 2016/2019 Core install with the Hyper-V role? The distinction matters because on Hyper-V Server or ESXi there is no OSE with which to manage the local drives; the local storage is only there to be used as datastores for VMs.
Then VMs should not be able to see storage on the host unless it is presented to the VM as a vHDD (VHDX or VMDK).
So if you are using Server 2016/19 with Hyper-V, it is good practice to use a non-OS drive for the VM datastore and, if possible, to keep the VMs in that one datastore without creating further folders and/or partitions in it (let the VM manager handle it).
Diskpart, I'm OK with, but the rest..
I tried connecting to the Hyper-V Server as you say, but it wouldn't connect; no permission.
As for backups, I'm OK with that. Habitually, I tend to have a local drive for convenience, and also have backups to a different appliance such as a NAS or external drive. I don't keep my eggs all in one basket.
adrian_ych wrote:
I think the OP may need to know the difference between Server 20xx with the Hyper-V role vs Hyper-V Server (or ESXi etc.). Even when saying "Hyper-V core", is it a Server 2016/2019 Core install with the Hyper-V role? The distinction matters because on Hyper-V Server or ESXi there is no OSE with which to manage the local drives; the local storage is only there to be used as datastores for VMs.
Then VMs should not be able to see storage on the host unless it is presented to the VM as a vHDD (VHDX or VMDK).
So if you are using Server 2016/19 with Hyper-V, it is good practice to use a non-OS drive for the VM datastore and, if possible, to keep the VMs in that one datastore without creating further folders and/or partitions in it (let the VM manager handle it).
I know the difference but for the purpose of this discussion, I am utilising Hyper-V Server 2019 as the host, and want Server 2019 Essentials as a VM.
The bit I am struggling with is "presenting" the drives/storage I have to the VM(s) so that I can utilise it.
From what I understand, he wants to use the RAID disks directly for the virtual machine. To make the RAID partitions visible, you need to go to the host machine's disk management and take them offline! Then you can attach them directly to the VMs!
Alternatively, you can create a VHDX, but that wouldn't be as nice as the approach I explained.
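The pass-through approach described here can be done from the host's PowerShell (Hyper-V Server has no Disk Management GUI, but the cmdlets are there). A sketch; the disk number and VM name are assumptions, and note the caution elsewhere in this thread about pass-through disks:

```powershell
# A physical disk must be offline on the host before a VM can own it.
Set-Disk -Number 2 -IsOffline $true

# Attach the raw disk to the VM's SCSI controller as a pass-through disk.
Add-VMHardDiskDrive -VMName "Essentials2019" `
                    -ControllerType SCSI `
                    -DiskNumber 2
```

Inside the guest, the disk then appears as a normal local disk to bring online and format.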
doommood wrote:
you can connect to Hyper-V Server from Windows 10 Pro with Hyper-V Manager installed and manage your VMs remotely.
Is it that straightforward? Because every time I've tried, I cannot connect.
wolfone wrote:
From what I understand, he wants to use the RAID disks directly for the virtual machine. To make the RAID partitions visible, you need to go to the host machine's disk management and take them offline! Then you can attach them directly to the VMs!
Alternatively, you can create a VHDX, but that wouldn't be as nice as the approach I explained.
The host machine (Hyper-V Server 2019) doesn't have 'disk management' as far as I am aware? It's not GUI, more command prompt or PowerShell.
Supaplex
TX2uk wrote:
But how, when Hyper-V Core doesn't have a GUI? Is it all done through PowerShell?
kevinmhsieh
I believe that your struggle is related to the fact that the host is not domain joined, and you don't have it configured properly to be remotely managed as part of a workgroup. Once you make some firewall changes, you can manage it from Windows 10 or another Windows 2016 or 2019 server using the Hyper-V Manager GUI tool.
I am about to go google "manage Hyper-V workgroup".
kevinmhsieh
Just one of many links stating how to remotely manage Hyper-V when in a workgroup.
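The usual workgroup recipe boils down to a few commands on each side. A sketch, with "HV-HOST" standing in for the actual host name and the administrator account assumed; the firewall and CredSSP details can vary by guide:

```powershell
# On the Hyper-V Server host (from the console PowerShell):
Enable-PSRemoting -Force
Enable-WSManCredSSP -Role Server

# On the managing Windows 10 machine (elevated PowerShell):
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV-HOST" -Force

# Store credentials so Hyper-V Manager can authenticate to the workgroup host.
cmdkey /add:HV-HOST /user:HV-HOST\Administrator /pass
```

After that, Hyper-V Manager's "Connect to Server" with "Connect as another user" generally works against the workgroup host.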
RobC0619
In a nutshell, you need to use either PowerShell or Hyper-V Manager to create VHD/X disks or pass-through disks and then attach them to the VM in question. VMs cannot see native disks by themselves. For non-domain-joined servers you have some work to do before you can use Hyper-V Manager, and there is also some work involved in managing a workgroup server in WAC. WAC will let you manage a core server's disks, as will the old Server Manager feature in Windows, but as mentioned, there is some setup needed before these tools will talk to a non-domain-joined server.
RobC0619
Also, I would caution against pass-through disks; Hyper-V is not good at this. Best to create VHDX disks and attach them to the VM. This makes the whole VM transportable to another host if that ever becomes necessary.
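A VHDX-based sketch of this suggestion, from the host's PowerShell. The path, size, and VM name are placeholders of mine; D: stands in for the mirrored data volume:

```powershell
# Create a dynamically expanding data disk on the mirrored volume...
New-VHD -Path "D:\VHDs\FileData.vhdx" -SizeBytes 500GB -Dynamic

# ...and hand it to the VM. Inside the guest it shows up as a new blank
# disk to be brought online, initialized, and formatted as usual.
Add-VMHardDiskDrive -VMName "Essentials2019" `
                    -ControllerType SCSI `
                    -Path "D:\VHDs\FileData.vhdx"
```

Because the data lives in a file rather than on a raw disk, the whole VM (config plus VHDX files) can be exported or moved to another host intact.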
adrian_ych
adrian_ych wrote:
I think the OP may need to know the difference between Server 20xx with the Hyper-V role vs Hyper-V Server (or ESXi etc.). Even when saying "Hyper-V core", is it a Server 2016/2019 Core install with the Hyper-V role? The distinction matters because on Hyper-V Server or ESXi there is no OSE with which to manage the local drives; the local storage is only there to be used as datastores for VMs.
Then VMs should not be able to see storage on the host unless it is presented to the VM as a vHDD (VHDX or VMDK).
So if you are using Server 2016/19 with Hyper-V, it is good practice to use a non-OS drive for the VM datastore and, if possible, to keep the VMs in that one datastore without creating further folders and/or partitions in it (let the VM manager handle it).
I know the difference but for the purpose of this discussion, I am utilising Hyper-V Server 2019 as the host, and want Server 2019 Essentials as a VM.
The bit I am struggling with is "presenting" the drives/storage I have to the VM(s) so that I can utilise it.
The main problem when using Server 2016/19 with Hyper-V as a host (instead of Hyper-V Server or ESXi) is that we sometimes still think of the host as a server or SAN where the VMs reside, and assume the VMs can use the host's other storage as a sort of vHDD.
The workaround is that you can still do the above as a mapped drive to a shared folder on the host.
As mentioned by someone above, VMware ESXi can use things like RDM to map SAN storage to a VM (something like the MS iSCSI initiator on physical servers), but it is strongly discouraged for VMs running on Server 2016/19 Hyper-V hosts. I am not sure whether this can be done for VMs on Hyper-V Server either.
The proper method is to create a vHDD (VMDK or VHDX) and then present it to the VM. For this case you should imagine the VM as a physical server: you cannot buy one HDD and "plug" it into 2 physical servers at once, unless it is a NAS or SAN using other protocols (FC, iSCSI etc.).
Question for the Hyper-V experts. I'm trying to assist on another thread, and I know that VMware's ESXi cannot handle what is needed. I have a feeling that Hyper-V can indeed handle software RAID (Windows software RAID, that is), but I haven't done this, nor have I seen or heard of anyone doing it, so I was hoping someone could answer the question before I recommend someone go down this path.
23 Replies
Alex3031
Good question. I am 99% sure it has no problem because windows handles the IO for guests.
Alex3031
I mean the control instance of windows running on the hypervisor controls IO
From the options in the Disk Management snap-in it looks like it can; the configuration options are present, but even my test systems are on hardware RAID, so I can't run a test for you.
Martin9700
RAID in this case is a function of the OS, so Hyper-V will not be aware that it's on a RAID array, just that it has an HD with X GB of space. You should be fine.
That being said, software-based RAID is not recommended due to performance issues, unless you are talking about Server 2012. With Server 2012 it's a whole new game.
StorageNinja
Samuel Brooks wrote:
RAID in this case is a function of the OS, so Hyper-V will not be aware that it's on a RAID array, just that it has an HD with X GB of space. You should be fine.
That being said, software-based RAID is not recommended due to performance issues, unless you are talking about Server 2012. With Server 2012 it's a whole new game.
I am fairly certain you can't use Storage Spaces pools to host Hyper-V VMs.
John773 wrote:
I am fairly certain you can't use Storage Spaces pools to host Hyper-V VMs.
I'd imagine you can use Storage Spaces/Storage Pools to implement a sort of software RAID and achieve the desired result.
mrsleep
I second that, software raid gives me nightmares.
It's this or FakeRAID or Windows on Software RAID without virtualization. It's a no-win situation.
Scott Alan Miller wrote:
It's this or FakeRAID or Windows on Software RAID without virtualization. It's a no-win situation.
With Server 2012, I think Microsoft wants to sidestep software RAID and instead is emphasizing storage virtualization. I'd feel much more comfortable with virtualizing the storage than I would with using a "traditional" software RAID (through Disk Management).
Are you looking to provide software RAID for the Hyper-V host or just the guest VMs?
jrondo4 wrote:
Are you looking to provide software RAID for the Hyper-V host or just the guest VMs?
Guests for sure, host doesn't matter so much. Could run from USB if necessary.
What type of redundancy is needed for this scenario? How many disks will be available to pool? Is Server 2012 on the table as an option? I hate to ask so many questions, but I feel like I'm throwing darts blindfolded and could maybe use a peek at the target I'm trying to hit.
Ahhh. I'm assuming the VMware vSphere infrastructure is already in place, so I'm not sure *exactly* where Hyper-V might fit into the OP's environment. Were you thinking about the stripped-down version of Hyper-V or a standard server with the Hyper-V role installed?
I'm also not sure why the OP wants to use the NAS for file *and* VM storage (other than to avoid the need to license another instance of Windows Server). Personally, I'd just fire up a Windows server VM to share files on the network.
Assuming the vSphere infrastructure is not yet in place and Server 2012/Hyper-V 2012 is on the table as an option, there are a lot of different ways one might reach the end goal. Hyper-V replica, "Shared Nothing" live migration, storage virtualization, the option to run VMs from an SMB3 share.
Now I'm wondering if I linked the wrong thread.
This one instead. Sorry about that.
I know software raid works for Hyper-v (2008R2). I had it setup in a lab environment
wthfit wrote:
I know software raid works for Hyper-v (2008R2). I had it setup in a lab environment
Thanks. Anyone know if you can do RAID 10? Looks like you can't.
Not sure about the software Raid 10. I googled it, but could not find a definitive answer
With Server 2008 R2 and earlier, I believe you can only do a mirror set with dynamic disks. With Server 2012, you can't really do RAID 10 either.
It's not RAID 1, but it's not RAID 10 either. It's somewhere in between. If I wanted to do something like a RAID 10 on Windows, I'd probably go that route with Server 2012.
EDIT: Also of note here is that Storage Spaces cannot provide redundancy for the operating system. That would have to be accomplished in another way. If the server ends up as a Hyper-V host, that may not matter, especially if the OS/hypervisor gets installed to a USB flash drive.