Strings when running any query? I noticed they all work when asking for data from the DB, using those DB names and row names.
Carlos - You have a number of SQL engines (MySQL, Microsoft SQL Server, etc.) and they all work a little differently, so SQLite tries to help out by being compatible with them. This means you could use an apostrophe or a quotation mark and SQLite tries to work out what you really mean.
Generally, strings are enclosed in apostrophes, so select * from table where field1='Fred' means it'll find data that matches the string 'Fred'. If you use field1="Fred" then it tries to find a column (or table, view, etc.) that matches "Fred". BUT, if it doesn't find one, SQLite thinks, "aah, did you mean 'Fred'?" so that might also work. You shouldn't rely on this though, and should ideally get the quotation marks correct.
Ideally, all field and table names should be a single word without spaces. So don't call a table "table 1". If you do, you have to put quotation marks around it (not apostrophes) when using it in SQL - eg, select * from "table 1" where field1='Blah blah'
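If it helps, here's a quick sketch of those quoting rules using Python's built-in sqlite3 module - the table and column names are just made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "table 1" (field1 TEXT)')  # spaced name needs quotes
con.execute("INSERT INTO \"table 1\" VALUES ('Fred')")

# Apostrophes always mean a string literal.
rows = con.execute("SELECT * FROM \"table 1\" WHERE field1='Fred'").fetchall()
print(rows)  # [('Fred',)]

# Double quotes mean an identifier, so "field1" here is the column itself,
# and field1 = field1 matches every (non-NULL) row.
rows = con.execute('SELECT * FROM "table 1" WHERE field1="field1"').fetchall()
print(rows)  # [('Fred',)]
```

Note how the same data comes back either way here, which is exactly why the sloppy form seems to "work" - until it quietly matches a real column name and does something you didn't intend.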
Does any of this make any sense?
I have set the prefetch value to 255
I don't believe this helps in this instance - it only helps when scrolling through the table, where it'll grab 255 records at once. Usual values are ~50,000, so it grabs 50,000 records in one go.
Without being rude, why are you using the data browser tab for such a large number of records? At 100 GB, this is going to be a lot of rows ... why would you want them all displayed? Usually, once the data is in the database, you perform actions on that data - how many records with a value of X in field Y, how many records with a value of Y in field Z, etc. You can do that with SQL - you don't need to view all million rows to find out there are a million rows. If you're scrolling through data, you can use LIMIT to make this manageable.
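As a sketch of that approach (the table and column names are invented), counting and paging with SQL rather than scrolling the grid might look like:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (status TEXT)")
con.executemany("INSERT INTO records VALUES (?)",
                [("open",)] * 3 + [("closed",)] * 2)

# Count rows matching a value - no need to eyeball the grid.
(n,) = con.execute(
    "SELECT count(*) FROM records WHERE status='open'").fetchone()
print(n)  # 3

# Page through the data one slice at a time with LIMIT/OFFSET.
page = con.execute(
    "SELECT rowid, status FROM records "
    "ORDER BY rowid LIMIT 2 OFFSET 2").fetchall()
print(page)  # [(3, 'open'), (4, 'closed')]
```

With a real 100 GB table you'd bump the LIMIT up, but the principle is the same: ask the database for answers, not for all the rows.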
In my eyes, this is like opening a 900,000-page Word document, then wondering why it takes a while to scroll up and down. Some will argue that DB4S should handle 80 million rows, but I fail to see the advantage of viewing this on screen... (just my personal opinion, of course!)
I don't want to re-open the issue and bother the developers if I made a mistake...
I would recommend opening a new issue and making reference to the other issue - don't re-open the old one. The old one referred to a different version of DB4S, plus it would notify the original poster, when really you need your own shiny new issue that belongs just to you. Open it up, as it might highlight something wrong with DB4S that can be fixed.
There is already an open issue where the number of rows visible in the grid is limited - possibly due to the control we use in DB4S. This comes back to my (waaaaaaaay too long) point above that displaying a huge number of results is a nuts thing to do anyway.
But if DB Browser for SQLite is really built in a way to forcibly load the entire database
Not intentionally, it doesn't - a lot of work has gone into making it as smooth as possible (the reason for the 'prefetch block size'), but the first thing it has to do is work out how many rows the table has, so it does a 'select count(*) from table', and it's this that can hammer large databases. Even with an index, a 100 GB database is going to take time to produce this figure.
How do you manage to count the number of rows so quickly ?
I knoweth not. I tend to steer clear of the DB4S code ... here be dragons. I know a lot has gone into making it optimised though... tweaks here and there, etc.
The application log in DB4S might assist in working out where the freeze is occurring.
If you don't need to edit records, the 'Execute SQL' tab is useful for grabbing records, and that should be speedy. Its results grid is read-only, which isn't always great.
But I thought it was impossible to have an accurate count of the number of rows quickly (i.e. faster than "select count(*)", which is taking ages)
Depends on the version of SQLite. I know SQLite itself (so not DB4S) has sexy internal methods to make this query fast. Depending on the table contents, it can be quicker doing a 'select count(1)' rather than a 'select count(*)', which would otherwise pull in a load of data only to discard it.
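To illustrate the trade-off (and this is just an illustration, not how SQLite or DB4S does it internally): count(*) is always exact but has to walk the table or an index, while a shortcut like max(rowid) returns instantly yet is only a rough upper bound - it goes wrong after deletes, and isn't available on WITHOUT ROWID tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (x)")
con.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(1000)])

(exact,) = con.execute("SELECT count(*) FROM big").fetchone()
(rough,) = con.execute("SELECT max(rowid) FROM big").fetchone()
print(exact, rough)  # 1000 1000 - they agree, but only count(*) is guaranteed

# Delete some rows: the exact count drops, the shortcut doesn't notice.
con.execute("DELETE FROM big WHERE x < 100")
(exact,) = con.execute("SELECT count(*) FROM big").fetchone()
(rough,) = con.execute("SELECT max(rowid) FROM big").fetchone()
print(exact, rough)  # 900 1000 - the shortcut now overcounts
```

Which is presumably why DB4S sticks with the slow-but-honest count(*) when it opens a table.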
So there is something wrong around this last query but no means for me to debug this further
That's useful to know - thanks for digging that out.
I would have suggested raising an official issue for it, as it's something we'd like to get fixed, but development has ... er, kinda stalled at the moment and the issue count is >470, so it's going to get lost in the noise for a while and won't be solved in the short term.