...


Workflow | Benefits | Comments (Sergey) | Comments (Jürgen) | Comments (...)
Single instance: we start a single asapo instance (scaling separate services like receiver, broker, database, etc. horizontally) and share it among all beamlines. So, for example, on N proxy nodes we will have N receivers serving all requests.

We don't need to start multiple instances, which simplifies administration and, in some respects, also resource sharing, especially where memory buffers are concerned.

ASAPO works as a single service for all users, like many other services (GPFS, SSH, DNS, Kafka, ...). We don't build a separate ASAP infrastructure or even a separate compute center for each beamline, so why should we do this for ASAPO?

Yes, theoretically we could start multiple services, but somehow it feels wrong to me. Anyway, the main showstopper is the memory buffer (and probably also the network). We would need to find a complicated solution for sharing resources. If we just statically start, say, 5 receivers for different beamlines, then each gets 1/5 of the memory, network, and CPU, and performance will suffer a lot. If we try to share resources dynamically, that is a lot of work: how would we, for example, reduce the memory allocation of a receiver that belongs to another beamline?
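
To make the static-split argument concrete, here is a small sketch with made-up numbers (not measured asapo figures) showing why a fixed 1/N partition hurts a bursting beamline while a shared pool would not:

```python
# Illustrative only: compares a statically partitioned memory buffer with a
# shared one when a single beamline bursts. Numbers are made up, not asapo defaults.

TOTAL_BUFFER_GB = 500          # total receiver buffer on a proxy node (assumed)
N_BEAMLINES = 5

static_cap_gb = TOTAL_BUFFER_GB / N_BEAMLINES   # each static instance is capped at 1/5
shared_cap_gb = TOTAL_BUFFER_GB                 # a single instance can lend idle capacity

burst_gb = 300  # one beamline temporarily needs 300 GB while the others are idle

print(f"static partitioning: cap {static_cap_gb:.0f} GB -> burst of {burst_gb} GB does not fit")
print(f"single shared instance: cap {shared_cap_gb:.0f} GB -> burst of {burst_gb} GB fits")
```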

So, I don't really see the benefits. I'd rather implement resource sharing inside asapo. We can do almost everything there, e.g.:

by configuring the discovery service, which tells producers/consumers which receivers they may communicate with, we can reserve nodes for specific beamlines, limit the number of TCP connections a beamline may open, ...
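
As an illustration of that idea, the sketch below models a hypothetical discovery-service policy; the field names (receiver_nodes, max_tcp_connections) and the lookup functions are assumptions made for this discussion, not the actual asapo discovery configuration:

```python
# Hypothetical per-beamline policy the discovery service could hold.
# Field names and values are assumptions, not the real asapo configuration schema.
BEAMLINE_POLICY = {
    "p01": {"receiver_nodes": ["proxy-01", "proxy-02"], "max_tcp_connections": 64},
    "p02": {"receiver_nodes": ["proxy-03"],             "max_tcp_connections": 16},
}

def receivers_for(beamline: str) -> list[str]:
    """Answer a producer/consumer query: which receivers may this beamline talk to?"""
    policy = BEAMLINE_POLICY.get(beamline)
    return policy["receiver_nodes"] if policy else []

def connection_allowed(beamline: str, current_connections: int) -> bool:
    """Reject new TCP connections once the beamline reaches its configured limit."""
    policy = BEAMLINE_POLICY.get(beamline)
    return policy is not None and current_connections < policy["max_tcp_connections"]

print(receivers_for("p01"))           # ['proxy-01', 'proxy-02']
print(connection_allowed("p02", 16))  # False: limit reached
```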

Also, if a beamline "goes wild" (which can only mean it sends a lot of data or makes a lot of requests, which should not be a big deal at all), we can think about some throttling like nginx does (https://www.nginx.com/blog/rate-limiting-nginx/). Btw, nginx also runs as a single instance.
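
For reference, nginx rate limiting is based on the leaky-bucket algorithm; a throttle inside asapo could work along the same lines. Below is a minimal token-bucket sketch (a close relative of the leaky bucket), purely illustrative and not taken from asapo or nginx code:

```python
import time

class TokenBucket:
    """Simple token-bucket throttle: allow at most `rate` requests per second,
    with bursts up to `burst`. Illustrative only."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate            # tokens added per second
        self.capacity = burst       # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller should reject or delay the request

# One bucket per beamline: a "wild" beamline only throttles itself.
limiters = {"p01": TokenBucket(rate=100, burst=20)}
print(limiters["p01"].allow())
```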





Is the memory buffer static or does it adapt to need and available space?

The CPU share should follow the load, as long as it is not wasted in busy-wait loops.

The network share should also follow the load, unless you send a lot of filler bytes.

Having one instance per beamline removes the complexity of resource sharing from asapo (as you mention in Benefits).


Also, deployment may be easier with many instances: with only one shared instance you cannot easily migrate to a newer version without affecting all beamlines, whereas with one instance per beamline (or even per beamtime) migration would be easier and no global maintenance window would be needed.


Multiple instance: we start M asapo instances (M = number of beamlines). So, for our example, on each proxy node we will have several receivers belonging to different beamlines (but not necessarily M*N receivers).

We could manage resource sharing at the system level (via cgroups/Docker containers), so there would be no need to take care of it inside asapo.
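
As a sketch of what such system-level limits could look like, the snippet below just assembles docker run commands with per-beamline memory and CPU caps; the image name, the limits, and the per-beamline breakdown are made-up examples, while --memory and --cpus are standard Docker flags:

```python
# Assemble `docker run` commands with per-beamline resource caps.
# Image name and limits are hypothetical; --memory and --cpus are standard Docker flags.
LIMITS = {
    "p01": {"memory": "100g", "cpus": "8"},
    "p02": {"memory": "50g",  "cpus": "4"},
}

def receiver_run_command(beamline: str) -> str:
    limits = LIMITS[beamline]
    return (
        f"docker run -d --name asapo-receiver-{beamline} "
        f"--memory {limits['memory']} --cpus {limits['cpus']} "
        f"asapo-receiver:latest"   # hypothetical image tag
    )

for bl in LIMITS:
    print(receiver_run_command(bl))
```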

Also, we could run different versions and assign nodes to beamlines via orchestration tools.

In general, it gives the feeling that this is "the asapo instance for beamline xx", and everything that happens there does not affect other beamlines.

...