Ticket #124 (new enhancement)

Opened 13 years ago

Last modified 13 years ago

Remote Package Management

Reported by: ciaranm Owned by: ciaranm
Priority: ProjectIdeas Milestone:
Component: core/paludis Version:
Keywords: Cc:
Blocked By: Blocking:
Distribution:

Description (last modified by ciaranm) (diff)

This is a fairly vague project description. We fully admit that we don't know exactly what the requirements or even the use cases are. We don't even know whether this one is doable.

Having said that:

Users want a better way of managing multiple broadly similar but non-identical systems. We think that this can be achieved by some combination of the following:

  • Multiple configuration sets. We support this already via config-suffix / environment, but it likely needs extending to better fit the needs of users.
  • Remote package installing. This will probably be done via improving the handling of binary packages.
  • Ability to carry out a task (install, uninstall, query) across multiple machines.

This may require a new client, or it may require extensions to existing clients.

Proposals for this should start with use cases. The design is probably trickier than the code. You should discuss your design with us before submitting any formal proposals or touching any code.

Change History

comment:1 Changed 13 years ago by asliebe

That idea is exactly what I'm looking for! Currently I have to manage all machines by logging in and doing my maintenance locally on each one.

But they all share (almost) the same repositories over NFS. Some are synced on my server because they are used on all machines; some are synced by one of my machines because I don't want them on my server as an install base. So, to start with, it would be a nice idea to support some kind of server-side repo management: i.e. have a repo-sync config file which paludis (--reposync) uses to sync all the repos configured there, and drop the sync, write_cache and names_cache keys from the local repo configs.

That way I could have one machine sync all repos and share them, while the clients/servers only use the ones configured in their repositories/*.conf as an install base. That also opens up the possibility of a local repo mirror that all client machines could use to sync their repos, which helps if they are mobile and don't always have access to the shared storage.
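To illustrate the split being suggested here, a client-side repository config might end up looking something like this. This is a purely hypothetical sketch: the paths and profile are made up, and only the location / sync / write_cache / names_cache keys come from the comment above.

```
# Hypothetical /etc/paludis/repositories/gentoo.conf on a *client*.
# The sync, write_cache and names_cache keys are dropped, because the
# server machine does all the syncing and caching; the client just
# points at the NFS-shared tree.
location = /mnt/server/repositories/gentoo
format = ebuild
profiles = ${location}/profiles/default-linux/x86/2006.1
```

The server would keep the full config, including the sync and cache keys, and run the syncs on a schedule.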

By the way, what do you have in mind for the remote install side of things?

comment:2 Changed 13 years ago by ciaranm

  • Description modified (diff)

For remote install, ideally one would be able to install a package on a remote system without that system requiring a tree or a full Paludis install. Perhaps a simple command line app that can be started over ssh that implements a small set of commands like 'create this directory with these permissions', 'create a file with this name, this content and these permissions', 'delete this file', 'does this file exist?' and so on. It might even be doable as a bash script. It would also have to have some way of handling preinst / postinst functions.

This isn't necessarily the only or best way of doing things.
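To make the idea above concrete, here is a minimal sketch of such a command-line agent. The protocol (one whitespace-separated command per line: MKDIR, WRITE, DELETE, EXISTS) is an assumption based on the examples in the comment, not anything Paludis actually ships; a real version would also need to stream file contents and run preinst/postinst hooks.

```python
# Hypothetical sketch of a tiny remote-install agent that could be
# started over ssh. It reads one command per line on stdin and performs
# the corresponding filesystem operation under a root directory.
import os
import shlex
import sys


def handle(line, root):
    """Parse and execute one protocol command; return a one-line reply."""
    args = shlex.split(line)
    if not args:
        return "ERR empty command"
    cmd, rest = args[0].upper(), args[1:]
    try:
        if cmd == "MKDIR":              # MKDIR <path> <octal-mode>
            path, mode = rest
            os.makedirs(os.path.join(root, path), int(mode, 8), exist_ok=True)
            return "OK"
        if cmd == "WRITE":              # WRITE <path> <octal-mode> <content>
            path, mode, content = rest  # content is a single token here, for brevity
            full = os.path.join(root, path)
            with open(full, "w") as f:
                f.write(content)
            os.chmod(full, int(mode, 8))
            return "OK"
        if cmd == "DELETE":             # DELETE <path>
            (path,) = rest
            os.unlink(os.path.join(root, path))
            return "OK"
        if cmd == "EXISTS":             # EXISTS <path>
            (path,) = rest
            return "YES" if os.path.exists(os.path.join(root, path)) else "NO"
        return "ERR unknown command " + cmd
    except (OSError, ValueError) as e:
        return "ERR " + str(e)


if __name__ == "__main__":
    # In the real setting this loop would sit at the far end of an ssh session.
    for line in sys.stdin:
        print(handle(line, "/"), flush=True)
```

The client side needs nothing but sshd and a Python (or, as suggested, bash/C++) interpreter; all dependency resolution and package building stays on the server.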

comment:3 Changed 13 years ago by asliebe

Here's what I've come up with: wrap things up in Ruby with XML-RPC.

The server runs:

  • a repo mirror
    • all repos are hosted on the server and shared via nfs/cifs/..., so no client needs to sync every time (but can)
    • all repos are properly cached by a mirror-syncer script (based on the paludis libs)
  • a Ruby (on Rails) XML-RPC server from which clients fetch their configs via a URL, keyed for example by MAC address

The clients run:

  • a kind of XML-RPC endpoint as well, e.g. ssh-based: once logged in, the XML-RPC handler acts as the "shell". I still have to figure out how to handle the (install) permissions.

That also means that every machine needs a working paludis install and its RPC scripts in place, because in my scenario all machines build their packages locally. Remember that every machine can have different library versions installed, so binary distribution makes no sense in any environment other than one with identical machines. To take the load off the clients, you can run a distcc server with ccache.
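The config-by-MAC part of this proposal can be sketched quite briefly. This is an illustrative mock-up, in Python rather than the Ruby suggested above, using the standard-library XML-RPC modules; the MAC addresses and repository names are invented.

```python
# Hypothetical sketch of the "clients get their configs by MAC address"
# XML-RPC server from this comment. Not part of Paludis.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Per-client repository lists keyed by MAC address (illustrative data).
CONFIGS = {
    "00:11:22:33:44:55": ["gentoo", "local-overlay"],
}


def get_config(mac):
    """Return the list of repositories a given client should enable."""
    return CONFIGS.get(mac, [])


def serve():
    """Start the XML-RPC server on a free local port; return that port."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(get_config)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]


if __name__ == "__main__":
    port = serve()
    client = ServerProxy(f"http://127.0.0.1:{port}/")
    print(client.get_config("00:11:22:33:44:55"))
```

As the next comment points out, though, this kind of stack may be heavier than what the client side actually needs.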

comment:4 Changed 13 years ago by ciaranm

Done correctly, the tree should only need to be on the server. Clients shouldn't need access to it at all.

A Ruby / XML based solution is much too heavy. On the client side, requirements should be as low as sanely possible. Probably just sshd and either bash + coreutils or a simple C++ program should suffice.

comment:5 Changed 13 years ago by asliebe

Sorry for the long delay (it's semester break here).

I haven't done much C/C++ coding at all, and I wasn't able to get my head around the idea of building the packages for the clients on another machine. Though it does make sense to distribute only binary packages to the clients.

I mean, it's not that I haven't distributed some qpkg'ed binaries on my systems myself, but I can't figure out how you'd manage the different repos, dependencies and libraries for the different clients on the server. (So I'm probably out of my depth here.)

The client side looks to me like ordinary root shell access. You said that even paludis might not be needed there; how do you plan to register the new packages in the package database (/var/db/pkg/)?

By the way, there is also the question of how to manage post-setup tasks (e.g. etc-update, ...).

All in all, this lightweight client-server model looks great for any stationary network, but it leaves out mobile clients which sometimes aren't reachable by the server, so at those times they can't install any new software or even see what software is available to them. (Just a thought for my scenario.)
