Ghislain Antony Vaillant
@ghisvail
but I'm not quite sure whether dcm4che-streams can handle 2)
besides, I need the Patient / Series / Study information from 2) to create the DICOM results in 5)
Does that make sense?
Karl Sjöstrand
@KarlSjostrand
@ghisvail Here's a quick stab at a fully streaming implementation of your use case (compiling but definitely not tested). There are a few things to refine here. First, the function that encapsulates the response data from the service API is not implemented. Further, I think you will need to modify this so that you make one request per fragment in the input DICOM secondary capture image. Anyway, let me know if I understood the use case correctly.
  val upload: Route = {
    import se.nimsa.dcm4che.streams.DicomFlows._
    import se.nimsa.dcm4che.streams.DicomParseFlow._

    // also assumed in scope: akka.http.scaladsl.server.Directives._ and Route,
    // akka.http.scaladsl.Http, akka.http.scaladsl.model._,
    // akka.stream.scaladsl.{Flow, Keep, Sink, Source}, akka.stream.OverflowStrategy,
    // akka.util.ByteString, scala.util.Try, the dcm4che Tag and UID constants,
    // plus an implicit ActorSystem (system) and Materializer

    // connection pool to the image service; each request is paired with its
    // DicomAttributes so responses (possibly out of order) can be matched up
    val pool: Flow[(HttpRequest, DicomAttributes), (Try[HttpResponse], DicomAttributes), Http.HostConnectionPool] =
      Http(system).cachedHostConnectionPool[DicomAttributes](host = "myimageapi.com", port = 8080)

    // wraps the service response bytes in DICOM data built from the collected
    // attributes - intentionally left unimplemented
    def encapsulate(attributes: DicomAttributes, source: Source[ByteString, Any]): Source[ByteString, Any] = ???

    path("upload") {
      post {
        extractDataBytes { (dicomBytes: Source[ByteString, Any]) =>
          complete {
            dicomBytes
              .via(validateFlowWithContext(Seq(ValidationContext(UID.JPEGBaseline1, UID.ExplicitVRLittleEndian)), drainIncoming = true))
              .via(parseFlow)
              .via(collectAttributesFlow(Set(Tag.PatientName, Tag.PatientID, Tag.StudyInstanceUID, Tag.SeriesInstanceUID, Tag.SOPInstanceUID)))
              .statefulMapConcat(() => {
                // pixel data chunks are pushed onto a queue and streamed out
                // as the entity of the HTTP request emitted below
                var inPixelData = false
                val (queue, source) =
                  Source
                    .queue[ByteString](25, OverflowStrategy.backpressure)
                    .prefixAndTail(0).map(_._2)
                    .toMat(Sink.head)(Keep.both)
                    .run()

                {
                  case a: DicomAttributes =>
                    // attributes collected: emit one request paired with them
                    (HttpRequest(
                      method = HttpMethods.POST,
                      uri = "/image",
                      entity = HttpEntity(ContentType(MediaTypes.`image/jpeg`), Source.fromFutureSource(source))
                    ) -> a) :: Nil
                  case h: DicomHeader =>
                    if (h.tag == Tag.PixelData) inPixelData = true
                    Nil
                  case v: DicomValueChunk if inPixelData =>
                    queue.offer(v.bytes)
                    if (v.last) {
                      inPixelData = false
                      queue.complete()
                    }
                    Nil
                  case _ => Nil
                }
              })
              .via(pool)
              .map {
                case (scala.util.Success(response), attributes) => attributes -> response.entity.dataBytes
                case (scala.util.Failure(_), _) => throw new RuntimeException("image service request failed")
              }
              .flatMapConcat {
                case (attributes, bytes) => encapsulate(attributes, bytes)
              }
          }
        }
      }
    }
  }
Ghislain Antony Vaillant
@ghisvail
That's a very nice stab indeed
Ghislain Antony Vaillant
@ghisvail
In my case the input / output of the pipeline should be DICOM pushes via SCU / SCP. For now, I am just interacting with files, so I should have a FileIO Source and Sink on both edges.
The API itself is gRPC-based, so I was able to auto-generate an Akka Streams service class for it using ScalaPB
but the signature is something like Flow[MessageIn, MessageOut, NotUsed], where both MessageIn and MessageOut contain the raw image data
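A minimal sketch of those file-based edges, with a hypothetical processing flow standing in for the pipeline above and made-up file names:

    import java.nio.file.Paths
    import akka.stream.scaladsl.{FileIO, Flow}
    import akka.util.ByteString

    // stand-in for the DICOM pipeline; an implicit ActorSystem /
    // Materializer is assumed to be in scope to run the stream
    val processing: Flow[ByteString, ByteString, Any] = ???

    FileIO.fromPath(Paths.get("input.dcm"))    // Source[ByteString, Future[IOResult]]
      .via(processing)
      .runWith(FileIO.toPath(Paths.get("output.dcm")))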
Ghislain Antony Vaillant
@ghisvail
but the final encapsulation step should use the DicomAttributes extracted from the input data to form the output DICOM data, which I guess is the purpose of via(collectAttributesFlow) combined with flatMapConcat?
Karl Sjöstrand
@KarlSjostrand
@ghisvail thanks for clarifying. Changing the above input / output to file handling is straightforward, as you point out. If you change this to SCP/SCU we get back to the earlier discussion: this can be done using dcm4che integrated with akka-streams, but it will not be a fully streaming and backpressured solution; at least I don't think that's possible. OK, so the other difference is that instead of .via(pool) above, which sends an HTTP request, you are using gRPC. The nice thing about the HTTP connection pool in akka-streams is that each request is paired with some domain object (a DicomAttributes above) so that you can handle responses correctly (since they happen in parallel and may arrive out of order). It also means that you can carry information through this service call, which we use to propagate the necessary DICOM information for encapsulation. Since you don't have this in your flow, I guess the gRPC service call is synchronous? Is there any way you can make this call return a Future instead of being a Flow? If so, you would make the call using .mapAsync, and this would make it easier to access the DicomAttributes object after the call finishes.
Ghislain Antony Vaillant
@ghisvail
gRPC is also async actually.
I see what you mean; using gRPC I might lose the benefit of the dedicated context created for each HTTP request, in which the corresponding DICOM attributes are stored
Ghislain Antony Vaillant
@ghisvail
hang on, actually ScalaPB allows me to make gRPC calls that return a Future
how would the call to mapAsync work within the workflow you wrote above?
Karl Sjöstrand
@KarlSjostrand
@ghisvail Keeping everything else the same and sticking with HTTP instead of gRPC (since I don't know it too well), using mapAsync instead of via(pool) would be something like
              // assumes val http = Http(system) and an implicit ExecutionContext
              // in scope for map/recover on the returned Future
              .mapAsync(8) {
                case (request, attributes) =>
                  http.singleRequest(request)
                    .map(response => scala.util.Success(response) -> attributes)
                    .recover {
                      case e: Throwable => scala.util.Failure(e) -> attributes
                    }
              }
This would allow up to 8 concurrent requests to be made and we propagate the attributes object by mapping over the result.
Ghislain Antony Vaillant
@ghisvail
One more question, how will I be able to access the individual values of the collected attributes in the encapsulate function?
Karl Sjöstrand
@KarlSjostrand
@ghisvail DicomAttributes is a standard domain object with the signature case class DicomAttributes(attributes: Seq[DicomAttribute]). DicomAttribute has the signature case class DicomAttribute(header: DicomHeader, valueChunks: Seq[DicomValueChunk]). What can become tricky is the interpretation of the value bytes (the Seq[DicomValueChunk] in each attribute), since some attributes, like PatientName, can be encoded using one or more character sets. If you have access to dcm4che in your project, you can put the information in a dcm4che Attributes object (including the SpecificCharacterSet attribute), as this class will take care of parsing the bytes for you. As an alternative, we are currently working on a new flow in dicom-streams to transcode all DICOM values to UTF-8. This will simplify interpretation of the bytes a lot, since you can then gather all bytes for an attribute in a ByteString object and just call .utf8String.
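For example, assuming exactly those case class shapes and a UTF-8-compatible value, a lookup could be sketched like this (stringValue is a hypothetical helper, not part of the library):

    import akka.util.ByteString

    // find a collected attribute by tag and decode its concatenated value chunks
    def stringValue(attributes: DicomAttributes, tag: Int): Option[String] =
      attributes.attributes
        .find(_.header.tag == tag)
        .map(_.valueChunks.map(_.bytes).foldLeft(ByteString.empty)(_ ++ _))
        .map(_.utf8String.trim)  // trim strips DICOM space padding

    // usage: stringValue(attributes, Tag.PatientID)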
Ghislain Antony Vaillant
@ghisvail
What I meant was: upstream I specified collectAttributesFlow(Seq(Tag1, Tag2, Tag3...)) and downstream I get a DicomAttributes(Seq[DicomAttribute]). I suppose the mapping between the Seq in collectAttributesFlow and the Seq in DicomAttributes is positional, right?
Or is there a way to do attributes.get(Tag1) and get the corresponding DicomAttribute (wrapped in a monadic type like Try or Option)?
Ghislain Antony Vaillant
@ghisvail
I am getting some AbruptIOTerminationException when importing some DICOM files on version 1.4.2, both with Docker and on a host deployment. Do you guys know where it could come from?
Whereas on 1.4.1 I am getting a DicomStreamException due to an unsupported presentation context.
I'd consider it a regression, given the loss of explicitness in the new error.
Ghislain Antony Vaillant
@ghisvail
After putting the right settings in slicebox.conf (accepted-sop-instances and accepted-transfer-syntaxes), the issue is gone. Still, I think the error message in version 1.4.1 was more informative than in 1.4.2.
Karl Sjöstrand
@KarlSjostrand
@ghisvail Totally agree. It sounds like a regression, I'll take a look.
On your comment about attributes - apologies for not responding yet. I am working on handling of attributes that has no dependency on dcm4che, but it is taking me a while to find something that feels right. The work resides in the branch https://github.com/slicebox/dicom-streams/tree/feature/dicom-attributes. I'll update here when it's finished.
Ghislain Antony Vaillant
@ghisvail
No problem Karl, glad you did not forget about me :-)
I am still getting up to speed with akka-streams and dicom-streams
Ghislain Antony Vaillant
@ghisvail
Quick question: in collectAttributesFlow, is there a reason why the parameter is tags: Set[Int] and not tags: Set[TagPath] or tags: Set[Tag]? The latter two would yield an interface that is safer and more explicit (any Int vs a DICOM tag).
In particular when discovering the library API via an IDE that provides type hints (like IntelliJ). In my experience, you can tell a library is well designed when you are led nicely by its type signatures.
Karl Sjöstrand
@KarlSjostrand
Good question. That method calls the more general method with parameters (tagCondition: TagPath => Boolean, stopCondition: TagPath => Boolean, maxBufferSize: Int) where tagCondition, as you can see, is based on tag paths, not Ints. The thinking was that the simplified case would be only for attributes in the root dataset. Type safety is always nice though, so a Tag class might make sense. What do you think?
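For reference, a call to that general overload could look something like the sketch below (that the overload shares the collectAttributesFlow name, the tag accessor on TagPath, the stop condition and the buffer size are all assumptions):

    .via(collectAttributesFlow(
      tagCondition = path => path.tag == Tag.PatientName || path.tag == Tag.PatientID,  // attributes to collect
      stopCondition = path => path.tag > Tag.PatientID,  // stop scanning once past the tags of interest
      maxBufferSize = 1000000))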
Ghislain Antony Vaillant
@ghisvail
I am confused as to why TagPath would not work here? After all, it's a wrapper type over the actual tag value as an Int, according to the content of the trait. So operations like Set[Int].max, used to find the maximum tag in the input set for collectAttributesFlow, should work with a Set[TagPath], assuming the latter implements the necessary ordering functionality, no?
The problem with the Tag class is that the name is already taken by dcm4che, so you'd risk introducing conflicts to the compiler, I suppose?
Karl Sjöstrand
@KarlSjostrand
I see your point. I made this change in the branch I'm working on and it looks good. I need to test it some more, but I'm hoping it will be merged within a week or so. I am not worried about the naming of the Tag class. If you want to access these classes (Tag, Keyword, UID) from both dcm4che and dicom-streams in the same Scala class, you can always rename them when importing, e.g. import org.dcm4che3.data.{Tag => CheTag}, but I'm hoping that dicom-streams will work without dcm4che in many situations going forward.
Ghislain Antony Vaillant
@ghisvail
FYI, I just filed #276 regarding the regression in the failure message mentioned earlier.
I see your point. What would be the difference between Tag and TagPath then?
Perhaps #276 could be a good newcomer bug to be fixed by someone like me?
Ghislain Antony Vaillant
@ghisvail
Probably a stupid question. How do you guys communicate between a Slicebox instance running in Docker and the host machine (running storescu / storescp)?
Ghislain Antony Vaillant
@ghisvail
I ran docker with docker run -d -p 5000:5000 -p 11112 and left 11112 unbound on the host for SCU / SCP
Karl Sjöstrand
@KarlSjostrand
@ghisvail Regarding #276 we provided a fix last week (slicebox/slicebox@2f286a4), but thanks for offering to submit a PR! Tons of other issues to work on if you can find the time.
Ghislain Antony Vaillant
@ghisvail
If you can tag the ones you consider newcomer friendly, then I'll happily have a look
Karl Sjöstrand
@KarlSjostrand
The Tag class provides a list of tag numbers defined as constants. If you think of a DICOM dataset as a tree of elements with data elements at the leaves and sequence elements elsewhere, then the TagPath data structure describes a path from the root into this tree. You can specify which items of sequences to go into, and also specify "all items" which means that a TagPath can point to multiple branches at once. One example of where this is useful is when modifying attributes. The tag path (0101,0202)[*].(0010,0010) would point to the (0010,0010) tag in all items of the (0101,0202) sequence - if that is what you wish to modify.
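As a sketch (the TagPath factory and builder method names here are assumptions, not necessarily the actual dicom-streams API), that example path could be built like:

    // PatientName (0010,0010) in all items of the (0101,0202) sequence
    val path = TagPath.fromSequence(0x01010202).thenTag(Tag.PatientName)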
Great idea with the newcomer issue label. Looking at the Akka repo, they have labels good first issue and small. Do you prefer one of these?
Ghislain Antony Vaillant
@ghisvail
Thanks for clarifying
Your call for the newcomer tag name :-)
Karl Sjöstrand
@KarlSjostrand
Hmm, I'll add small first and see how we go :smile:
Karl Sjöstrand
@KarlSjostrand
Ok first issue tagged with small.
Michael Kober
@kobmic
@ghisvail Regarding your question about SCU/SCP and running in Docker - I have to admit I had never tested that until now. The easiest way should be docker run -p 5000:5000 -p 11112:11112, or if you want to add multiple SCU/SCP configs you'd probably define a port range: docker run -p 5000:5000 -p 11110-11119:11110-11119.
Ghislain Antony Vaillant
@ghisvail
I tried using -p 11112:11112, but then the host application (storescu or storescp, for instance) complains that the port is already bound
Michael Kober
@kobmic
OK, now I see the problem: my SCU/SCP test was from one sbx to another sbx on different hosts