[EXPERIMENTAL] Promoting container interoperability through standard definitions
ContainerInterface / PSR-11 basically, and service-provider (which is the WIP for interoperable module configuration). The rest is "deprecated". @all is it fine with you if I add "deprecated" warnings to all the other packages? (in the README)
ContainerInterface / PSR-11? It seems to me how people view that interface has changed a lot in the last year; many people like it now. If the poll shows major support, maybe we shouldn't delay PSR-11 any longer?
container-interop/container-interop being so widely used shows that ContainerInterface is widely used.

From his answer:
The ParentAwareContainerInterface is a formal way of saying "this container can have a parent". Nothing more, nothing less.
I've just posted a very long comment describing a simpler pattern enabling us to share entries between containers:
https://github.com/container-interop/container-interop/issues/55#issuecomment-285939658
As I've argued in the past, standardizing definitions potentially removes the reason to choose one container implementation over another, which is bad - typically what makes the container implementation choice relevant in the first place is what sort of features are offered in terms of registering and configuring the entries.
It's partially a matter of semantics, role-definitions and role-relationships, more so than precisely what these interface signatures look like - rather than trying to standardize how we define entries, I think we'd be better off defining a means of exchanging entries between containers. So that, at the end of the day, we don't have to write the definitions of a module against a "lowest common denominator" of containers/builders/factories, but instead can leverage proprietary features of the chosen container-implementation for each module.
It's a slightly different direction/philosophy, but it's much simpler and (as described in that very long comment) may support a broader range of scenarios than just sharing definitions, so please do think about it.
As I've argued in the past, standardizing definitions potentially removes the reason to choose one container implementation over another, which is bad
@mindplay-dk please see "If everything is standardized, is there a point to having many container implementations anymore?" => does it answer this point?
@mnapoli not really - in fact, I find the whole idea sort of conflicts with itself.
You say that:
it is meant to be used by developers writing modules.
But then proceed to say:
End users (i.e. developers) can still choose their favorite containers and make use of all their specific features.
Calling developers "end users" in one scenario is just playing with words - you even have "i.e. developers" in parens, so the argument is circular: developers are free to use their favorite containers and make use of their specific features, but should use these interfaces.
You seem to view the creation of a module as something very different from the creation of an app/project? To me, all of that constitutes "coding", and I want complete freedom to always choose the appropriate container for my project. I don't want to have to switch to a different API because I'm writing a module.
I don't believe we should be standardizing this, at all - I think we should create a much simpler standard that enables us to import/export the actual entries, but allows any module or project vendor to work with the container of their choice.
I think we can achieve effectively the same thing with much less complexity.
Perhaps this is too difficult or abstract to discuss in writing. I think my next step may be to create a repository with these much simpler interfaces, then fork a couple of different container implementations and implement them, so we have something more concrete to discuss.
I believe features like aliasing, overrides, extension, etc. should be proprietary features - we gain nothing from standardizing on these, it only serves to remove or abstract away the interesting differences between containers.
We shouldn't need to choose between a "standard" vs "proprietary" way of defining entries, not for projects, nor for modules. What matters is being able to share entries - whether these entries were defined using aliasing, overrides, extension, or any number of other concepts, these are implementation details, and I don't believe there's any reason they shouldn't remain proprietary to each implementation.
Bottom line, if you can import/export entries between containers, why/how/when does it matter how those entries were defined? I don't see how it does.
The only argument I can maybe see is that you can avoid having simultaneous instances of different container implementations. But that's an argument for a lot of complexity in favor of very marginal micro-performance gains. If I have two or three (or even five or ten, if you want to get extreme) different container implementations in a project, they are still likely creating only a very, very minimal performance overhead; containers tend to be the cheapest components in the stack.
I think we would do better taking the simplest, shortest possible path to exchanging entries - rather than trying to standardize how entries are even generated in the first place; each container has its own opinions and features for that, and it's a far more complex problem, which I believe should remain an implementation detail.
entries should also be overridable and extendable between modules (so there should be a way to achieve that)
As argued in the long thread mentioned above, I regard these as implementation details - whether a source or target container happens to implement aliasing, overrides, extension mechanisms, or any other means of generating entries, what I'm proposing treats all of that as implementation details beyond the scope of what's required for the simple exchange of entries.
That is, if my container supports any of those features, and I import entries from your container, I'm all set - I would simply import what your container provides, then use my own container to alias, override or extend any of the registrations you provided.
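As a bare-bones illustration of that flow (plain arrays standing in for containers, no real container API implied):

```php
<?php
// Stand-in sketch: arrays of id => factory represent the two containers.
// The import copies your entries verbatim; the "extend" step then wraps
// one of them using my side's own (here hand-rolled) mechanism.

// What your provider/container exports, however it was defined internally:
$yourEntries = [
    'greeting' => function () {
        return 'Hello';
    },
];

// 1. Import: my container simply takes over your factories.
$myEntries = $yourEntries;

// 2. Extend: wrap the imported factory with my own decoration.
$previous = $myEntries['greeting'];
$myEntries['greeting'] = function () use ($previous) {
    return $previous() . ', world!';
};

echo $myEntries['greeting'](); // "Hello, world!"
```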
The container part, especially with full stack frameworks like ZF, Laravel or Symfony (which have a lot of modules/bundles), can take a non-negligible amount of time if it's not optimized
Two thoughts: (1) combining two full-stack frameworks is most likely not very desirable for various practical reasons, but (2) the comparative overhead, in any case, consists merely of loading more classes; the bootstrapping of the entries needs to happen either way, regardless of which container they're being registered with, so (as I think you almost implied) that needs to be optimized by each container anyway.
The practical performance difference between what you're suggesting and what I'm suggesting is likely nil: with what I'm proposing, the provider loops over its internal entries and exports them with a method call; with what you're proposing, the provider returns an array, and then the receiving container loops over the entries and imports them, which it will most likely do with an internal method call. If there's any performance difference, it'll be marginal. If your application makes even a single PK SELECT, the overhead is likely already negligible. (I'm not arguing against benchmarks, I'm just saying micro-performance concerns should not be allowed to drive up complexity for no other reason.)
Anyways, I have more stuff in writing now - will push to a temporary repo for further discussion.
@mnapoli @moufmouf here's a draft description and interfaces:
https://github.com/mindplay-dk/provider-interop
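Very roughly, the shape of what I have in mind is a provider that pushes zero-argument resolvers into a registry - something along these lines (illustrative names only; the actual draft interfaces are in the repo):

```php
<?php
// Illustrative guess, not the actual draft: a "provider" pushes its public
// entries into a "registry", each entry being a zero-argument resolver.

interface ServiceRegistryInterface
{
    /**
     * Registers an entry under $id; $resolver takes no arguments and
     * returns the entry when called.
     */
    public function register($id, callable $resolver);
}

interface ServiceProviderInterface
{
    /**
     * Exports this provider's public entries into the given registry.
     */
    public function provideServices(ServiceRegistryInterface $registry);
}
```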
I will implement this in unbox, likely later this week, and then fork some existing projects and other containers and try to implement it as well. We'll see how this pans out in practice.
@mnapoli I think the watershed difference between what you're proposing and what I'm proposing is that, in a sense, I'm doing the opposite of what you're proposing.
What our team has been doing so far is actually more akin to what you're proposing, but we have a lot of modules, and having everything registered in a single container starts to get out of hand - by nature of that approach, all of the internal dependencies of each module are available to other modules, and ultimately to the project, which, I'm starting to realize, is not desirable.
I think what you accomplish with your approach is the opposite of isolation. It seems like a good thing on the surface, because it appears to maximize flexibility - but there are often internal components in a module that really should not be exposed, aliased, overridden or extended; the consumer should not know about them at all.
Classes such as factories and repositories may well be implementation details subject to change - typically the commitment is to a public service API, and we want the freedom to refactor things like factories and repositories internally in the module when this doesn't affect its public service interfaces.
Anyhow, this is all theory, I will try to back it up with some real-world examples and implementations in the coming weeks, and we'll see where this leads :-)
Hey @mindplay-dk,
I'm just catching up. Interesting! I understand your desire to share entries between containers (more flexible!), and it kind of looks like what we were trying to do with container-interop's delegate dependency lookup feature (although your idea is way more structured).
That being said, I have a number of questions that come to my mind.
1- In your idea, a service provider (that is actually a container) can "provide" services to a third-party container. However, I don't see how your service provider can fetch services from this same third-party container, since the callable resolver is passed zero parameters. One could of course argue this is a bad idea, but if so, it needs to be said.
Something like this seems hard to do with your proposal:
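For example, roughly (all names here are illustrative):

```php
<?php
// The provider exports a zero-argument resolver, so an entry that depends
// on something defined only in the consuming container has no way to
// reach it. All names below are made up for illustration.

class Mailer
{
    public $logger;

    public function __construct($logger)
    {
        $this->logger = $logger;
    }
}

$exported = [
    'mailer' => function () {
        // We would like: return new Mailer($container->get('logger'));
        // ...but no container is passed in, so the consumer's 'logger'
        // entry is unreachable from here.
        return new Mailer(null);
    },
];

$mailer = $exported['mailer']();
```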
2- @mnapoli asks about how we can "extend" entries because it is precisely what made us look into service providers in the first place (instead of sticking with the delegate dependency lookup feature). Consider this example:
- One service provider (let's call it A) is providing a Twig_Environment.
- Several service providers (let's call them B1, B2, B3) are "extending" the Twig_Environment by registering Twig extensions in it ($twig->addExtension(new MyExtension());).
With container-interop/service-provider, this is pretty easy right now. You simply pass an array of service providers and that's it. With your proposal, although it is not impossible, it seems more difficult. If I understand correctly, the service-provider A must register with service-provider B1 that must register with service-provider B2 that must itself register with service provider B3 that finally will register with the main container. We kind of have a "graph" of service providers and although this is more powerful, it is also way more difficult to use.
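As a rough sketch of that pattern (simplified, not the literal service-provider signatures; assumes a recent Twig 1.x is installed, and MyExtension is illustrative):

```php
<?php
// Simplified sketch of the A + B1..B3 pattern - not the literal
// container-interop/service-provider interface.

use Psr\Container\ContainerInterface;

class MyExtension extends Twig_Extension {}

// Provider A: creates the Twig_Environment.
$providerA = [
    'twig' => function (ContainerInterface $c) {
        return new Twig_Environment(new Twig_Loader_Array([]));
    },
];

// Provider B1: extends the previously defined 'twig' entry.
$providerB1 = [
    'twig' => function (ContainerInterface $c, callable $previous) {
        $twig = $previous();
        $twig->addExtension(new MyExtension());
        return $twig;
    },
];

// The consuming container is handed [$providerA, $providerB1, ...] and
// chains factories that share an entry name - no provider-to-provider
// registration graph is needed.
```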
Finally, when thinking about service-providers, we challenged the idea against other ideas using this grid: https://github.com/container-interop/fig-standards/blob/container-configuration/proposed/container-configuration-meta.md#63-rationalizing-the-choice-between-service-providers-common-file-format-and-definition-interface
You might want to see how your idea compares to the others.
Anyway, I still find the idea interesting, and I'm waiting for your real-world examples to form a more precise opinion about it.
@moufmouf thanks for engaging in this conversation :-)
I don't see how your service provider can fetch services from this same third party container, since the callable resolver is passed zero parameters. One could of course argue this is a bad idea, but if so, it needs to be said.
It's not necessarily a bad idea - it depends on what you need/want... But there are two ways this could happen: (1) if the service-provider is also a service-registry, you can simply register (or inject) your component into it, or (2) for more systemic integration, if the vendor's container supports delegate look-ups, you can use that feature.
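A quick sketch of option (1), with every class and method name being hypothetical:

```php
<?php
// Option (1), sketched with a hypothetical provider that doubles as a
// registry: the consumer pushes what the module needs into it up front.

class StubProviderRegistry
{
    private $factories = [];

    public function register($id, callable $factory)
    {
        $this->factories[$id] = $factory;
    }

    public function get($id)
    {
        return $this->factories[$id]();
    }
}

$provider = new StubProviderRegistry();

// The consumer injects its own component into the provider/registry...
$provider->register('logger', function () {
    return new \stdClass(); // stand-in for the consumer's logger
});

// ...so the module's own factories can resolve it internally:
$provider->register('service', function () use ($provider) {
    return [$provider->get('logger'), 'the actual service'];
});
```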
To keep things simple, I'm trying not to mandate or include features that aren't essential to the problem of exchanging entries - for example, I'm trying not to mandate a delegate look-up feature, which some containers don't have, don't want, or can't support; I'm trying not to dictate any architectural decisions.
2- @mnapoli asks about how we can "extend" entries because it is precisely what made us look into service providers in the first place
I regard features such as overriding, extending or aliasing as being outside the problem scope - although these features are supported by many (but not all) container implementations, I regard those as container responsibilities; I don't see them as essential to the problem domain of exchanging entries.
To use your Twig example, let's say you ship a PHP-DI container with the core bootstrapping for Twig, and I want to ship a module that registers a Twig extension. Well, the Twig module you shipped uses PHP-DI, and you would have chosen that implementation for a reason, such as its ability to extend an existing entry - to make use of these features, I will need to ship my Twig extension as a PHP-DI provider.
I don't see that as a problem, I see it as just a natural thing - the ability to extend (or override, or alias, or other container-specific features) is a container-feature, not a provider-feature, and again, it's not essential to the problem of exchanging entries.
Looking at your comparison matrix, you're comparing against a feature matrix to see how the solution satisfies those requirements - my belief is that many of those features are container-features, not provider-features; to me, provisioning is simply about the import/export (exchange) of entries between containers. If I want to subsequently alias/configure/extend something, I would have selected a container with those abilities. If any of those tasks are relevant to the provider/module I'm shipping, I will select an appropriate container.
I believe that reducing the scope to the bare essentials is well in the vein of PSR-11 itself, and I do believe it's feasible. PSR-11 itself has an absolutely minimal scope: get() and has() - it almost sounds ridiculous, but it's massively versatile; its power lies in its simplicity and in the imagination of the developers who build around it.
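For reference, that entire scope is essentially:

```php
<?php

namespace Psr\Container;

interface ContainerInterface
{
    /**
     * Finds an entry of the container by its identifier and returns it.
     */
    public function get($id);

    /**
     * Returns true if the container can return an entry for the given
     * identifier, false otherwise.
     */
    public function has($id);
}
```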
I think it's important to regard these two interfaces not as a "solution", but much in the same way you would likely regard PSR-11 - as "contact points" for the actual solutions, which could come in a variety of forms.
The goal for me is interoperability of solutions, not a solution unto itself. I'm pretty sure that was the goal with PSR-11 as well.
Keep in mind that these two interfaces are not the only tools at your disposal - you also have good ol' fashioned OOP :-)
For example, maybe your Twig module ships with a proprietary interface for "Twig module extensions", which might make it even easier and more obvious to use - or maybe it encapsulates the bootstrapping of Twig extensions somehow and makes it reusable in some way.
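For instance, such a proprietary extension point could be as simple as this (hypothetical, just plain OOP shipped by the Twig module itself):

```php
<?php
// Hypothetical: an interface shipped by (and proprietary to) the Twig
// module; implementations are collected and applied during its bootstrap.

interface TwigModuleExtension
{
    /**
     * Called by the Twig module while it configures its Twig_Environment.
     */
    public function configure(Twig_Environment $twig);
}
```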
@moufmouf On a last note, something that came up when I was discussing this with a coworker today... We're building a very large, very modular system - I saw one project today bootstrapping like 25 proprietary (unbox) providers, and these work much the way you're proposing this should work; they're all injecting their dependencies into a single container.
We're starting to realize how risky this is - it works well at a small scale, but at a larger scale, with several hundred entries from dozens of providers written by various developers, the problem is that I can't possibly know about everything these providers are registering - a lot of it isn't even relevant to me, and the more we add on, the greater the risk of collisions.
For example, let's say your module exposes a service, but also has a bunch of repositories it uses internally, and some of those depend on a PSR-16 cache. Let's say my module also exposes a service and has its own repositories, some of which also depend on a PSR-16 cache. If you bootstrap your container with CacheInterface::class and I do the same, that works well in separation - all our tests are passing, etc. - but when someone decides to bootstrap both into a single container, we have a potentially serious problem: suddenly there are key collisions, and clearing your cache for some reason wipes out my cache, and no one is completely sure why.
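A bare-bones sketch of that collision (stand-in class, illustrative only):

```php
<?php
// Both modules register a factory under the exact same id, so whichever
// registration "wins" is shared by both - or one silently replaces the
// other, depending on the container.

class StandInCache { /* pretend this is a PSR-16 implementation */ }

$entries = [];

// Module A exports its internal cache binding...
$entries['Psr\SimpleCache\CacheInterface'] = function () {
    return new StandInCache(); // configured for module A
};

// ...and Module B exports the very same id into the same container:
$entries['Psr\SimpleCache\CacheInterface'] = function () {
    return new StandInCache(); // module A's registration is now gone
};

// Both modules' repositories now share one cache and one key space, so
// clearing "your" cache can wipe "my" entries as well.
```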
Consider the alternative - if you don't export your cache registration, and I don't export mine, we don't have a problem. We also don't have to export implementation details like our internal repositories, which are used by our services - and the services are what you should be using from your project.
Within a single domain (such as your Twig example) allowing providers to bootstrap the same container/factory can be fine - you control how that domain gets bootstrapped. But when you start to mix unknown components from many different domains and different vendors, things can start to go south very quickly.
I'm afraid that, with what you're proposing, collisions and other issues will start to happen at scale. I think we can minimize that by isolating each domain in its own container "bubble", selectively exporting public services while keeping our internal components and bootstrapping, well, internal; that's not really possible when you're registering everything against a single container.