Proxmox VE as High Availability solution for HomeSeer


    #46
    Originally posted by lurendrejer View Post
    I don't remember if replication works with lvm (faster than ext) - I see some technical difficulty but the pmox people aren't stupid.



    I'd go for lvm and use virt-io virtual NICs and scsi controllers if possible.
    It will not work, and there are no plans to support it. See:
    https://forum.proxmox.com/threads/co...storage.84877/
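    For anyone following along, a minimal sketch of why this matters: Proxmox storage replication only runs on local ZFS (zfspool) storage, not LVM. Node and storage names below are examples, not from this thread:

    ```shell
    # List configured storages and their types; pvesr replication
    # jobs only work when the guest's disks sit on type "zfspool".
    pvesm status

    # Create a replication job for guest 100 to node "pve2",
    # running every 15 minutes (job id format is <guest>-<jobnum>).
    pvesr create-local-job 100-0 pve2 --schedule '*/15'

    # Check replication state and last sync time
    pvesr status
    ```
    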


    ---
    John



      #47
      Originally posted by John245 View Post

      No idea if that is possible. What will be the advantage?

      ---
      John
      Everything but compatibility - speed, size, etc.



        #48
        Originally posted by lurendrejer View Post

        Everything but compatibility - speed, size, etc.
        Disadvantage will be security.

        ---
        John



          #49
          Originally posted by lurendrejer View Post
          I might just end up buying this: Supermicro SuperServer 5018D-FN4T
          Why this version and not a server build using this board? https://www.supermicro.com/en/produc...1SDV-8C+-TLN2F

          ---
          John



            #50
            Originally posted by John245 View Post

            Why this version and not a server build using this board? https://www.supermicro.com/en/produc...1SDV-8C+-TLN2F

            ---
            John
            Also a great idea.
            I like to keep things in my rack - the purpose of true servers would be IPMI and some sort of standardisation.



              #51
              Originally posted by lurendrejer View Post

              Also a great idea.
              I like to keep things in my rack - the purpose of true servers would be ipmi and some sort of standardisation.
              I think complete it will cost around 2500 euros.

              I also like to keep things in my rack. Still investigating all the options.

              ---

              John



                #52
                Originally posted by lurendrejer View Post

                Also a great idea.
                I like to keep things in my rack - the purpose of true servers would be ipmi and some sort of standardisation.
                Which one would you prefer?
                1. X10SDV-TLN4F
                2. X10SDV-8C-TLN4F
                And why?

                ---
                John




                  #53
                  I actually forgot I had a small cluster running in an 'odd' location.
                  As far as I can see, you don't need extra disks for ZFS - the hosts here have a single (very small) NVMe drive. It is only used for 'compute'; hence it has 2288G Xeons.

                  Attached Files



                    #54
                    Originally posted by John245 View Post

                    Which one would you prefer?
                    1. X10SDV-TLN4F
                    2. X10SDV-8C-TLN4F
                    And why?

                    ---
                    John

                    The first one - because of the IPMI.
                    I only had a very brief look.



                      #55
                      They both seem to have a dedicated IPMI NIC - I guess they are identical besides active/passive cooling.
                      I'd go for the one that fits whatever cooling you have in the case.
                      Rack cases often don't need fans on their heatsinks, since they have VERY aggressive cooling built into the case.



                        #56
                        Originally posted by lurendrejer View Post
                        I actually forgot I had a small cluster running in an 'odd' location.
                        As far as I can see, you don't need extra disks for ZFS - the hosts here have a single (very small) NVMe drive. It is only used for 'compute'; hence it has 2288G Xeons.
                        Did you partition your NVMe?

                        ---
                        John


                        Sent from my iPhone using Tapatalk



                          #57
                          Originally posted by John245 View Post

                          Disadvantage will be security.

                          ---
                          John
                          Why? I'm not an expert on LXC, but I would think the container runs isolated from the host OS.
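                          How isolated it is depends largely on whether the container is unprivileged. A quick way to check on a Proxmox host (the container ID 101 is just an example):

                          ```shell
                          # Unprivileged containers map container root to a high,
                          # unprivileged host UID; "unprivileged: 1" is the safer setup.
                          pct config 101 | grep unprivileged

                          # The UID mapping is defined here; container UID 0 typically
                          # shows up on the host as UID 100000.
                          grep root /etc/subuid
                          ```
                          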



                            #58
                            Originally posted by John245 View Post

                            Did you partition your NVMe?

                            ---
                            John


                            Sent from my iPhone using Tapatalk
                            Looks that way - I build so much **** that I can't even remember what it did yesterday :'D
                            Attached Files



                              #59
                              Oh.. looking at the screenshots, it does have two disks - sorry!
                              And they are SATA disks - I wonder why I ended up there.



                                #60
                                Originally posted by lurendrejer View Post

                                Looks that way - I build so much **** that I can't even remember what it did yesterday :'D
                                It is an option, but mirroring to the same SSD does not make much sense with respect to removing points of failure.

                                Although an option would be one OS SSD and one partitioned for the ZFS.
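                                Roughly like this - a sketch only, with example device, pool, and storage names; verify with lsblk before running anything:

                                ```shell
                                # Identify the spare SSD (here assumed to be /dev/sdb)
                                lsblk

                                # One whole-disk partition for ZFS (bf01 = Solaris/ZFS type code)
                                sgdisk -n1:0:0 -t1:bf01 /dev/sdb

                                # Single-disk pool on that partition - note: no redundancy
                                zpool create -o ashift=12 tank /dev/sdb1

                                # Register the pool in Proxmox as zfspool storage,
                                # so replication jobs can use it
                                pvesm add zfspool local-zfs2 --pool tank
                                ```
                                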

                                ---
                                John

