    Árni Hermann Reynisson
    @arnihermann
    what measures should I take to prevent sql injection with roc?
    Jeffrey N. Davis
    @penland365
    @arnihermann I’ve been working on parameterized queries for the past couple of weeks - prepared queries should be landing in the next week or so. Right now, though, there isn’t a way to do it - my apologies for that. I’ve been trying to get the full transport layer completed before tackling requests.
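
    Until that lands, the only mitigation is to sanitize values yourself before splicing them into the SQL string. The helper below is purely illustrative - it is not part of roc - and is the kind of stopgap that the escape call in the snippets further down presumably implements:

      // Illustrative stopgap only: double single quotes before interpolating a
      // value into a SQL string. Not a substitute for real parameterized queries.
      def escape(value: String): String = value.replace("'", "''")

      val playerId = "O'Brien"
      val sql = s"SELECT * FROM player_reports WHERE player_id = '${escape(playerId)}'"
      // sql: SELECT * FROM player_reports WHERE player_id = 'O''Brien'
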
    Árni Hermann Reynisson
    @arnihermann
    @penland365 ok awesome
    no worries
    the library feels very nice btw
    Jeffrey N. Davis
    @penland365
    Thanks!
    Fred Jonsson
    @enginoid

    Hey there! I'm looking into a strange issue that we've been encountering in a service where we're using roc, and was wondering if anyone has seen anything similar. The issue is that a number of our queries seem to have rows that don't contain the right columns when we map over them. Here's one example method that we're seeing runtime errors for:

      def getReportsCountAgainstPlayer(playerId: String): Future[Int] = {
        val req = new Request(s"""
           |SELECT COUNT(player_id) AS report_count
           |FROM player_reports WHERE player_id = '${escape(playerId)}'
         """.stripMargin)
    
        pgClient.query(req).map { result =>
          result.foldLeft(0) { (sum, row) =>
            row.get('report_count).as[Int]
          }
        }
      }

    Here's the exception that we're seeing for this method:

    scala.MatchError: roc.postgresql.failures$ElementNotFoundFailure: Could not find element 'report_count in Row (of class roc.postgresql.failures$ElementNotFoundFailure)
    Now, in this case the result should always contain report_count: even when no row matches the WHERE condition, postgres still returns that aggregate column (as zero if nothing matched).
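
    That SQL behavior can be checked directly with the same client calls used above; the probe below only reuses accessors already shown in this chat, and the player id is made up. An aggregate without GROUP BY always yields exactly one row, so the column lookup should always succeed.

      // Hypothetical probe: COUNT(...) with no GROUP BY returns one row even
      // when the WHERE clause matches nothing, so 'report_count must be present.
      val probe = new Request(
        "SELECT COUNT(player_id) AS report_count FROM player_reports WHERE player_id = 'no-such-player'")
      pgClient.query(probe).map { result =>
        result.map(_.get('report_count).as[Int]).head   // expected: 0
      }
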
    Fred Jonsson
    @enginoid
    We run this through a finch endpoint, and that's where I see this behavior. When I invoke the same method through the REPL, I'm not able to reproduce this.
    This is a low-volume service (internal tool) and I feel like I see this way more when the server/roc client is "cold". I just tried making 10 requests against one server and this error came up two times. I then made the same 10 requests again and now all of them come back as 200s.
    I've verified that there's not a timeout that's triggering some weird behavior -- in fact the roc timeout seems to be set to a whopping 10 seconds.
    So this kind of looks like a race of some sort, but I'm not entirely sure where it's being triggered or what's the best place to go investigate.
    Jeffrey N. Davis
    @penland365
    Interesting ( well, intellectually, sorry for the issue! ).
    Fred Jonsson
    @enginoid
    Also, all our other endpoints using roc are behaving in this way with a similarly unpredictable pattern.
    Yes, I have to admit that I'm pretty enthralled by it myself!
    I did run a tcpdump too and when the client is failing in this way, there is a correct-looking response coming back from the postgres server (as far as my eyes can tell through the binary marks).
    That is, there does not seem to be a significant difference in the postgres query or the response between the cases where it fails and those where it does not.
    Jeffrey N. Davis
    @penland365
    So, my first thought is that you’re right - there is a race issue. By default ( for the moment ), there is only one connection, so all of these requests are queueing up behind that single connection, each waiting for it to complete.
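
    One crude way to test the race hypothesis, sketched under the assumption that the service uses the Twitter futures that roc/Finagle are built on: fire the same query many times at once so the requests pile up behind the single connection, and tally how many come back with a decode failure.

      import com.twitter.util.{Await, Future}

      // Fire 50 copies of the same query concurrently and count the outcomes.
      val probes: Future[Seq[String]] = Future.collect(
        (1 to 50).map { _ =>
          getReportsCountAgainstPlayer("some-player-id")
            .map(_ => "ok")
            .handle { case e => e.getClass.getSimpleName }
        }
      )
      println(Await.result(probes).groupBy(identity).mapValues(_.size))
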
    Jeffrey N. Davis
    @penland365
    When the command fails, what does result.completedCommand give?
    Fred Jonsson
    @enginoid
    That was actually what I wanted to find out via the REPL, but I wasn't able to reproduce this there. I should probably put in some logging and get back to you.
    Anything else you would look for?
    Jeffrey N. Davis
    @penland365
    result.completedCommand is a nice sanity check - it’s super easy to decode from the server, and will tell us if the issue is Roc side or Postgres side ( my guess is that it is Roc side somehow, but I don’t know yet )
    You should see something like SELECT ROW_COUNT
    Fred Jonsson
    @enginoid
    Right, makes sense. I guess since this error occurs when we're mapping over rows, I might as well print the rows as we encounter them.
    Jeffrey N. Davis
    @penland365
    also, check what columns are being returned - you can do that via
    result.columns.foreach(c => println(c.name))
    The postgres style is to send a specific error code ( there are hundreds ) if anything at all goes wrong during the query phase. The fact that the result returns correctly and you are able to map over it at all suggests that, from Postgres’s perspective, the Client Request / Response performed correctly.
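
    Putting those two suggestions together, a debugging pass over the failing method could look roughly like this; the println placement is illustrative, and completedCommand and columns are the accessors named above.

      def getReportsCountAgainstPlayer(playerId: String): Future[Int] = {
        val req = new Request(s"""
           |SELECT COUNT(player_id) AS report_count
           |FROM player_reports WHERE player_id = '${escape(playerId)}'
         """.stripMargin)

        pgClient.query(req).map { result =>
          // Sanity checks: what does Postgres say it completed, and which
          // columns actually came back?
          println(s"completedCommand: ${result.completedCommand}") // expect something like SELECT 1
          result.columns.foreach(c => println(s"column: ${c.name}"))
          result.foldLeft(0) { (sum, row) => row.get('report_count).as[Int] }
        }
      }
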
    Fred Jonsson
    @enginoid

    Thanks for your help @penland365. I've added logging and just attempted to reproduce the problem. I wasn't able to reproduce it just now (even though I'd been able to pretty consistently before), perhaps because:

    • The servers are in some kind of "hot state" after the deployment and need to sit idle for a little while before we can reproduce. (I'm skeptical of this explanation; at least in JIT terms they should be as cold as ever, since we're not pre-warming this server.)
    • The players I've requested are in the postgres cache, and fast retrieval is less likely to trigger the race.
    • Logging is interfering with the race.

    At any rate, I'm going to leave the logging on debug overnight and wait for this to re-occur. Thanks for your help -- I look forward to reporting back!

    Jeffrey N. Davis
    @penland365
    Thanks!
    Fred Jonsson
    @enginoid
    Unfortunately, what I have to report at the moment is that the bug seems to be entirely gone. The last occurrence of the exception is right before I deployed the change. The system is used around the clock and we were getting a few errors every hour, so we should definitely have seen an error by this point (16 hours later).
    What I'm looking into now is whether the roc dependency changed and whether this was potentially a bug that is now fixed.
    Fred Jonsson
    @enginoid
    It looks like no significant dependencies or shared packages changed between the two revisions.
    Fred Jonsson
    @enginoid
    I only implemented logging in one endpoint, but multiple endpoints were failing with similar errors, so logging does not seem to be the cause.
    Out of insanity, I'm going to try turning logging off and see what happens.
    Jeffrey N. Davis
    @penland365
    :(
    Fred Jonsson
    @enginoid
    Turning logging off has actually helped re-surface the errors (if there's a causal relationship at all), so I'm about to turn it back on.

    I've also discovered an interesting encoding issue that I'm trying to track down at the moment. I thought it would come down to configuration, but at this stage there's not a lot to configure in the client. I set up these functions to execute against a local postgres instance that I spun up:

    def getFood(): Future[String] = {
      val req = new Request(s"SELECT 'Empeñada' as course".stripMargin)
      pgClient.query(req).map { result =>
        result.map(_.get('course).as[String]).head
      }
    }
    
    def show(symbol: Symbol): Future[Seq[String]] = {
      val req = new Request(s"show ${symbol.name}".stripMargin)
      pgClient.query(req).map { result =>
        result.columns.foreach(column => println(column.name))
        result.map(_.get(symbol).as[String]).toSeq
      }
    }

    When I call getFood, it looks like something has gone wrong in the decoding of the string:

    scala> getFood().get
    res5: String = Empeᅢᄆada

    The client encoding and server encoding both seem set to UTF-8:

    scala> show('server_encoding).get
    res6: Seq[String] = List(UTF8)
    
    scala> show('client_encoding).get
    res7: Seq[String] = List(UTF8)
    At first I thought textDecoder was at fault, so I added a unit test with unicode characters but that one passed with flying colors
    Jeffrey N. Davis
    @penland365
    On the getFood().get comment, what terminal are you using?
    There is certainly a possibility the encoders are wrong, but they are tested w/ ScalaCheck, which generates a preponderance of Special Unicode characters during testing
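
    For illustration, a round-trip property in the style described above; the sketch below exercises UTF-8 encode/decode directly rather than roc's own encoders, whose exact API isn't shown here.

      import java.nio.charset.StandardCharsets
      import org.scalacheck.Prop.forAll
      import org.scalacheck.Properties

      // Arbitrary strings (ScalaCheck produces plenty of non-ASCII ones) must
      // survive a UTF-8 encode/decode round trip unchanged.
      object Utf8RoundTrip extends Properties("utf8") {
        property("round trip") = forAll { (s: String) =>
          new String(s.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8) == s
        }
      }
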
    Fred Jonsson
    @enginoid
    Apple's Terminal with default settings. Let's see if I can tease the actual unicode values out of this with some Scala function to make sure that's not it.
    Fred Jonsson
    @enginoid
    I'm sure there are better ways to get at this, but it does look like this is the actual encoding of the string:
    scala> val food = getFood().get
    food: String = Empeᅢᄆada
    
    scala> food.foreach(char => println("%02X" format char.hashCode))
    45
    6D
    70
    65
    FFC3
    FFB1
    61
    64
    61
    I'll file an issue for now. Please let me know if you have a hunch as to what this might be -- I might be able to dig in.
    Jeffrey N. Davis
    @penland365
    I think I see the error
    That needs to be
    val strValue = new String(bytes, StandardCharsets.UTF_8)
     Text(column.name, column.columnType, strValue)
    I hadn’t written tests yet for much of the Result, since that API was going to change at some point
    @enginoid Thanks for catching this!
    Can you log a bug for me on github? I’ll push a change today
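
    For reference, the hex dump above is consistent with the string having been built char-by-char from signed bytes instead of being decoded as UTF-8; the REPL sketch below reproduces the symptom without depending on roc's internals.

      import java.nio.charset.StandardCharsets

      val bytes = "Empeñada".getBytes(StandardCharsets.UTF_8)

      // Promoting each signed byte straight to a Char turns 0xC3 0xB1 ('ñ')
      // into \uFFC3 \uFFB1 -- the exact values seen in the hex dump.
      new String(bytes.map(_.toChar))           // "Empe\uffc3\uffb1ada"

      // Decoding the whole byte array as UTF-8 gives the expected string.
      new String(bytes, StandardCharsets.UTF_8) // "Empeñada"
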
    Fred Jonsson
    @enginoid
    Ahh, I see. Thanks for looking into it!
    Will do!