Jens Neuhalfen
@neuhalje
Sorry, was AFK
BTW: I wrote a dependency injection implementation for one of my projects
It works pretty well (and is quite smallish)
Nicolas Sebrecht
@nicolas33
There is more than one way to do dependency injection. ;-)
Jens Neuhalfen
@neuhalje
Yup
Nicolas Sebrecht
@nicolas33
How did you implement it?
Jens Neuhalfen
@neuhalje
Wait a sec
Nicolas Sebrecht
@nicolas33
@ishankhare07 and @aroig You might be interested in the above. ;-)
(about the imapfw design)
(restarted from scratch, BTW)
Jens Neuhalfen
@neuhalje
So this is how you declare factories and dependencies:
    #
    #  storage
    #

    # PER_ROOT: one instance per query root (business transaction)
    @di.provides(scope="PER_ROOT")
    def transaction():
        return Transaction()

    # GLOBAL: one shared instance
    @di.provides(scope="GLOBAL")
    def storage__sqlite_instance(configuration):
        return SQLiteInstance.create_or_open(configuration.storage_root_path_sqlite)

    # dependencies are injected by parameter name
    @di.provides(scope="PER_ROOT")
    def storage__DB(storage__sqlite_instance, transaction):
        return SQliteTX(storage__sqlite_instance, transaction)
Nicolas Sebrecht
@nicolas33
decorators and params.
Jens Neuhalfen
@neuhalje

The idea is that you normally operate in some kind of business transaction or context.

At the beginning of such a transaction (say the request handler for a REST call) you would create a new business transaction (I named it query root):

    query_root = di.new_query_root()

    db = query_root.get_dependency("storage__DB")
Nicolas Sebrecht
@nicolas33
Easy when all in one thread. :-)
Jens Neuhalfen
@neuhalje
Instances get reused as defined: a factory method declared with @di.provides(scope="PER_ROOT"), e.g. transaction(), will return the same transaction instance within the same query root
It should work multithreaded
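A minimal sketch of how such a scope-aware container could look (the di.provides / new_query_root / get_dependency names follow the snippets above; the internals are only illustrative, not the actual implementation):

    import inspect

    class DIContainer:
        """Tiny scope-aware DI container (illustrative only)."""

        def __init__(self):
            self._factories = {}     # name -> (factory, scope)
            self._globals = {}       # GLOBAL instances, shared by all query roots

        def provides(self, scope="PER_ROOT"):
            # Register the decorated factory under its own function name.
            def decorator(factory):
                self._factories[factory.__name__] = (factory, scope)
                return factory
            return decorator

        def new_query_root(self):
            # Each query root (business transaction) gets its own PER_ROOT cache.
            return QueryRoot(self)

    class QueryRoot:
        def __init__(self, container):
            self._container = container
            self._instances = {}     # PER_ROOT instances, local to this root

        def get_dependency(self, name):
            factory, scope = self._container._factories[name]
            cache = self._container._globals if scope == "GLOBAL" else self._instances
            if name not in cache:
                # Resolve the factory's parameters recursively by name.
                kwargs = {param: self.get_dependency(param)
                          for param in inspect.signature(factory).parameters}
                cache[name] = factory(**kwargs)
            return cache[name]

    di = DIContainer()

    @di.provides(scope="PER_ROOT")
    def transaction():
        return object()              # stand-in for Transaction()

    root = di.new_query_root()
    assert root.get_dependency("transaction") is root.get_dependency("transaction")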
Nicolas Sebrecht
@nicolas33
But not multiprocessed, I guess.
Threads suck because of shared memory state.
I don't want this anymore.
Jens Neuhalfen
@neuhalje
That depends on the requirements. Sharing object instances across different processes: nope.
Keeping a singleton invariant across processes: nope.
Multiprocess makes many things easier by making a few things way harder :-)
Nicolas Sebrecht
@nicolas33
I do want message passing to allow multiprocessing and avoid most of the locks and shared-state sync issues.
Thanks, I'll look at that.
Jens Neuhalfen
@neuhalje
Message passing seems like a good idea
How would you synchronize resource access?
Nicolas Sebrecht
@nicolas33
There's no need to sync resources.
Each worker will loop and no-op if there's no request.
Jens Neuhalfen
@neuhalje

With message reordering I would assume that deadlocks could be tricky.

Ah, ok. But how about e.g. manipulating the IMAP/Maildir?

… the cats demand their food :-) … afk …
Nicolas Sebrecht
@nicolas33
Do you have an example?
IMAP worker started
Maildir worker started
Engine started
State worker started
engine requests the news (last synced state vs. current state) from the maildir
engine requests the news (last synced state vs. current state) from the imap
engine receives the updates
engine applies the sync logic
engine sends the updates to either side (imap and/or maildir)
engine sends the applied updates to the state
No sync required (except if we consider the "updates received from the repositories").
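A minimal sketch of that layout, with one process per repository worker and queues for the requests (the worker and engine functions here are only illustrative, not imapfw's actual code):

    from multiprocessing import Process, Queue

    def repository_worker(name, requests, replies):
        # Loop and no-op until the engine asks for something; no shared memory, no locks.
        while True:
            request = requests.get()
            if request == "stop":
                break
            if request == "news":
                # Stand-in for scanning the repository (IMAP or Maildir) for changes.
                replies.put((name, ["%s: message 1 changed" % name]))

    def engine(sides):
        for requests, _ in sides.values():
            requests.put("news")                # engine requests news from each side
        news = {}
        for name, (_, replies) in sides.items():
            worker, updates = replies.get()     # engine receives the updates
            news[worker] = updates
        print("sync logic would merge:", news)  # placeholder for the 3-way merge
        for requests, _ in sides.values():
            requests.put("stop")

    if __name__ == "__main__":
        sides, workers = {}, []
        for name in ("imap", "maildir"):
            requests, replies = Queue(), Queue()
            sides[name] = (requests, replies)
            worker = Process(target=repository_worker, args=(name, requests, replies))
            worker.start()
            workers.append(worker)
        engine(sides)
        for worker in workers:
            worker.join()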
Nicolas Sebrecht
@nicolas33
No need for message reordering.
"state" is the repository. It's the common ancestor. How the previous sync finished.
(syncing is a 3-way merge)
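A toy version of that 3-way decision for one message's flags, with the last synced state as the ancestor (purely illustrative):

    def three_way_merge(ancestor, imap, maildir):
        # Classic 3-way merge of one value: keep whichever side changed it.
        if imap == maildir:
            return imap              # both sides agree (or neither changed)
        if imap == ancestor:
            return maildir           # only the maildir side changed
        if maildir == ancestor:
            return imap              # only the imap side changed
        raise ValueError("conflict: both sides changed the value differently")

    # e.g. the flags of one message
    print(three_way_merge(ancestor={"seen"}, imap={"seen"}, maildir={"seen", "flagged"}))
    # -> {'seen', 'flagged'}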
Jens Neuhalfen
@neuhalje
@nicolas33 is there a possibility that e.g. two processes manipulate the same folder?
Nicolas Sebrecht
@nicolas33
Sure.
I'd say this is the responsibility of the app owner.
Nicolas Sebrecht
@nicolas33
Oh, there's another concurrency issue you're pointing out.
I don't see how a repository could be sent incompatible commands, though.