Chris Locke
@chrisjlocke
[attached image: image.png]
If you click 'Edit' and 'Preferences' then the 'Data Browser' tab, there is a 'threshold for completion' option. Set this to zero and the functionality should be disabled.
Carlos W. Mercado
@carloswm85
Hello
I was writing some queries for the first time using sqlb
I was wondering: what's the difference between 'String', "String" and String when running a query? I noticed they all work when fetching data from the DB, using those table and column names.
Chris Locke
@chrisjlocke

Carlos - You have a number of SQL engines (MySQL, Microsoft SQL Server, etc.) and SQLite tries to help out by being compatible with them, as they all work a little differently. This means you could use an apostrophe or a quotation mark and SQLite tries to work out what you really mean.
Generally, strings are enclosed in apostrophes, so select * from table where field1='Fred' means it'll find data that matches the string 'Fred'. If you use field1="fred" then it tries to find a column (or table, view, etc.) that matches "fred". BUT, if it doesn't find one, SQLite thinks, "aah, did you mean the string 'fred'?", so that might also work. You shouldn't rely on this though, and should ideally get the quotation marks correct.
Ideally, all fields and tables should be a single word without spaces. So don't call a table "table 1". If you do, you have to put quotation marks around it (not apostrophes) when using it in SQL - eg, select * from "table 1" where field1='Blah blah'

Does any of this make any sense?
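The quoting rules Chris describes can be sketched with Python's built-in sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

# In-memory database with a hypothetical table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 TEXT)")
con.execute("INSERT INTO t VALUES ('Fred')")

# Apostrophes make a string literal: this matches the stored value 'Fred'.
n_literal = len(con.execute("SELECT * FROM t WHERE field1 = 'Fred'").fetchall())

# Double quotes normally name an identifier, but there is no column "Fred",
# so SQLite falls back to treating it as the string 'Fred' (the misfeature
# described above; some SQLite builds disable this fallback).
n_fallback = len(con.execute('SELECT * FROM t WHERE field1 = "Fred"').fetchall())

# A table name containing a space must be double-quoted, while the string
# stays in apostrophes.
con.execute('CREATE TABLE "table 1" (field1 TEXT)')
con.execute("INSERT INTO \"table 1\" VALUES ('Blah blah')")
n_spaced = len(
    con.execute("SELECT * FROM \"table 1\" WHERE field1 = 'Blah blah'").fetchall()
)
```

All three queries return the one matching row, which is exactly why mixing up the quote styles often goes unnoticed until it suddenly doesn't.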

jessie-github
@jessie-github
Hello, it seems that I am running into this issue with version 3.12.1 on Windows with a big database (100 GB, one big table): sqlitebrowser/sqlitebrowser#584
I don't have any issues with a 5 GB database.
On the "Browse Data" tab, the row count is displayed at the bottom after a few seconds, and after that the UI is frozen with the bottom-right message "Busy (reading rows)".
I have set the prefetch value to 255, but it doesn't help.
CPU usage is high but I still have enough RAM available.
Can somebody tell me if I have missed something? I don't want to re-open the issue and bother the developers if I made a mistake... Thanks!
Chris Locke
@chrisjlocke

I have set the prefetch value to 255

I don't believe this helps in this instance - it only helps when scrolling through the table, as it grabs 255 records at once. Usual values are ~50,000, so it grabs 50,000 records in a go.

Without being rude, why are you using the data browser tab for such a large number of records? At 100 GB, this is going to be a lot of rows ... why would you want these all displayed? Usually, once the data is in the database, you perform actions on that data - how many records with a value of X in field Y, how many records with a value of Y in field Z, etc. You can perform that using SQL - eg, you don't view all million rows to find out there are a million rows. If you're scrolling through data, you can use 'limit' to make this manageable.
In my eyes, this is like opening a 900,000-page Word document and then wondering why it takes a while to scroll up and down. Some will argue that DB4S should handle 80 million rows, but I fail to see the advantage of viewing all of that on screen... (just my personal opinion, of course!)
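The query-don't-scroll point can be sketched with Python's sqlite3 (table and column names are invented for the example):

```python
import sqlite3

# Hypothetical table with 1,000 rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (y INTEGER)")
con.executemany("INSERT INTO records VALUES (?)", [(i % 10,) for i in range(1000)])

# "How many records have value 3 in field y?" -- no need to display them all.
(matching,) = con.execute("SELECT COUNT(*) FROM records WHERE y = 3").fetchone()

# And when you do want to eyeball rows, LIMIT keeps the result manageable.
sample = con.execute("SELECT * FROM records LIMIT 50").fetchall()
```

The count comes back as a single number, and the sample is capped at 50 rows, however large the table is.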

I don't want to re-open the issue and bother the developers if I made a mistake...

I would recommend opening a new issue and referencing the other one - don't re-open the old one. The older one referred to a different version of DB4S, plus re-opening would notify the original poster, when really you need your own shiny new issue that belongs just to you. Open it up, as it might highlight something wrong with DB4S that can be fixed.
There is already an open issue where the number of rows visible in the grid is limited - possibly due to the control we use in DB4S. This comes back to my (waaaaaaaay too long) point above that displaying a huge number of results is a nuts thing to do anyway.

jessie-github
@jessie-github
This is a valid question. The database is a dump of important information that we want preserved and available for query (i.e. an archive) and/or to be used as an input for other tools (i.e. an alternative to CSV files). However, in some cases we want to be able to query the database (it is indexed, of course) to look at a particular record. For instance, if we have a bug somewhere, we may want to look at the record and its surroundings.
Perhaps this is an edge use case, but I am trying to promote SQLite over CSV files (for large exchange files), and people need to have a look at the data before it is processed.
(For instance, in emacs I can open a 100 GB CSV file without loading the complete file into memory, so it is instantaneous.)
Chris Locke
@chrisjlocke
My point is (or was...): is someone going to load a grid with twenty million records and scroll through the lot? Or are they going to narrow it down into chunks?
You can't compare DB4S to emacs - as in my example above, Word wouldn't open a 90,000-page document either.
An SQLite database is a million times better than a CSV for querying - a database is built for querying, while a CSV isn't.
The 'Browse Data' tab is designed for small tables, as it loads every record and relies on a filter. What would be better (for your example) is a grid asking 'what do you want to look at?' (eg, a numeric range or date range) so you can scroll through that. Again, a subset, rather than the complete table.
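The "what do you want to look at?" idea can be sketched with sqlite3: fetch only a requested range rather than the whole table (the schema and dates here are made up):

```python
import sqlite3

# Hypothetical log table with one row per day of January 2024.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ts TEXT, msg TEXT)")
con.executemany(
    "INSERT INTO log VALUES (?, ?)",
    [(f"2024-01-{day:02d}", "event") for day in range(1, 31)],
)

# Only the subset the user asked for ever reaches the grid.
window = con.execute(
    "SELECT * FROM log WHERE ts BETWEEN '2024-01-10' AND '2024-01-15' ORDER BY ts"
).fetchall()
```

An index on the range column keeps this fast even on very large tables, since SQLite never has to touch the rows outside the window.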
jessie-github
@jessie-github
Indeed, and this is what I am often doing as well. Sometimes I just want to browse through the first hundred lines to see what the content is, and other times I want to filter so that I only have rows matching the criteria. The problem here is that the GUI is frozen and I cannot enter any filter at the top of the column.
I am using (abusing?) it a bit like I would use Excel to browse/filter/display a table with a lot of data.
For debugging it is really useful. When manipulating large CSV files (as input for other tools), it is really useful to be able to open the file (without loading the entire file into memory) and explore the content a bit to see how to parse it (for instance, to check that the dates are in the standard YYYY-MM-DD format instead of MM/DD/YYYY).
Similarly for an SQLite file, I wanted to have a peek without having to (manually) type a query to display the first hundred lines.
jessie-github
@jessie-github
But if DB Browser for SQLite is really built in a way that forcibly loads the entire database when using the "Browse Data" tab, no problem - I'll try to find another way. I just thought that it would be useful to offer the possibility of loading in chunks (like the emacs vlf package), but I can understand that it is not a feature which is often requested.
Chris Locke
@chrisjlocke

But if DB Browser for SQLite is really built in a way to forcibly load the entire database

Not intentionally - there has been a lot of work to make it as smooth as possible (the reason for the 'prefetch block size'), but the first thing it has to do is work out how many rows the table has, so it does a 'select count(*) from table', and it's this that can hammer large databases. Even with an index, getting this figure for a 100 GB database is going to take time.

jessie-github
@jessie-github
But aren't you using max(rowid) to estimate the number of rows? It seems to be working (I can see the number of rows at the bottom quite quickly).
jessie-github
@jessie-github
Yes, only 5-10 seconds to get the row count for a 100 GB database, so I am pretty sure that it is using max(rowid).
Chris Locke
@chrisjlocke
Max(rowid) doesn't tell you the number of rows in a table ... it just shows the highest row number. For tables with auto-increment on, where rowids aren't recycled, that's not going to be accurate at all - I have a table with one row whose rowid is in the thousands...
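The gap between max(rowid) and the real row count is easy to reproduce with sqlite3 (a made-up table, deleting all but the last row):

```python
import sqlite3

# 1,000 rows, then delete everything except the row with the highest rowid.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.execute("DELETE FROM t WHERE rowid < 1000")

(max_rowid,) = con.execute("SELECT MAX(rowid) FROM t").fetchone()
(row_count,) = con.execute("SELECT COUNT(*) FROM t").fetchone()
# One row left, but its rowid is 1000 -- the two numbers disagree badly.
```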
jessie-github
@jessie-github
I am not sure, but otherwise how is DB Browser for SQLite calculating the number of rows so quickly (5 seconds for a 100 GB file)? I never deleted any rows, so max(rowid) would be accurate. Perhaps I should try deleting some rows to see if the number at the bottom changes. But I thought it was impossible to get an accurate row count quickly (i.e. faster than "select count(*)", which takes ages).
jessie-github
@jessie-github
OK, I tried deleting some rows, and the count displayed by DB Browser for SQLite is quick and accurate! How do you manage to count the number of rows so quickly?
Anyway, the reason for the freeze is not the process which counts the rows, then, because the freeze happens after the count is displayed.
Chris Locke
@chrisjlocke

How do you manage to count the number of rows so quickly ?

I knoweth not. I tend to steer clear of the DB4S code ... here be dragons. I know a lot has gone into optimising it though... tweaks here and there, etc.
The application log in DB4S might assist in working out where the freeze is occurring.
If you don't need to edit records, the 'Execute SQL' tab is useful for grabbing records, and that should be speedy. The results grid there is read-only though, which isn't always great.

But I thought it was impossible to have an accurate count of the number of rows quickly (i.e. faster than "select count(*)" which is taking ages

Depends on the version of SQLite. I know SQLite itself (so not DB4S) has sexy internal methods to make this query fast. Depending on the table contents, it can be quicker doing a 'select count(1)' rather than a 'select count(*)', which would pull in a load of data only to discard it.

jessie-github
@jessie-github
OK, I looked at the Application Log, and indeed to count the number of rows it is using "SELECT COUNT(*)", and it returns the result in 3 seconds (I am surprised!).
The next query is "SELECT 'rowID', * FROM table LIMIT 0, 253" and the GUI is frozen from there (the first lines do not appear). I tried executing this query outside and it returns the result instantly.
So there is something wrong around this last query, but I have no means to debug this further (nothing in the Error Log either).
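For anyone reading along: the logged query is fetching one prefetch block of rows, since SQLite accepts LIMIT <offset>, <count> as shorthand for LIMIT <count> OFFSET <offset>. A sketch of the same paging pattern in sqlite3 (hypothetical table):

```python
import sqlite3

# Hypothetical table with 1,000 rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# LIMIT 0, 253: grab one prefetch block of 253 rows starting at offset 0;
# subsequent pages would use LIMIT 253, 253 and so on.
page = con.execute("SELECT rowid, * FROM t LIMIT 0, 253").fetchall()
```

Run on its own like this, the query returns the block instantly, which matches the observation that the freeze is somewhere in DB4S rather than in SQLite.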
Chris Locke
@chrisjlocke

So there is something wrong around this last query but no means for me to debug this further

That's useful to know - thanks for digging that out.
I would have suggested raising an official issue for it, as it's something we'd like to get fixed, but development has ... er, kinda stalled at the moment, and the issue count is >470, so it's going to get lost in the noise for a while and won't get solved in the short term.

jessie-github
@jessie-github
No problem, I'll open it and at least we'll have a trace. It is very easy to reproduce. Thanks for your answers!
Justin Clift
@justinclift
Cool. Yeah, DB4S isn't optimised for large databases, even though SQLite itself is a decent enough format for transporting data between systems.
At present, DB4S loads things into RAM and doesn't clear out old loaded data. So, as you scroll through stuff, RAM usage just goes higher and higher.
With a (say) 1 GB database, you could be using 4 GB of RAM to view it. And ditto with a 2 GB database - you could be using some multiple of 2 GB of RAM to keep it in memory. So a 100 GB database is going to go badly with DB4S if you try to scroll through the whole thing without several hundred GB of RAM.
... and that's not even counting the SQLite backend's processing time for running queries, though indexes can definitely help there. :)
Justin Clift
@justinclift
... and Qt (the library we use for displaying data) seems to have a maximum number of rows it'll handle in its widgets (16 million or so, from super rough memory). Someone reported a bug about it a few years ago, and the Qt team responded that they weren't interested in changing it - a "you're doing it wrong" kind of response (my wording). ;)
mseaworthy
@mseaworthy
Wondering if there is a way to import multiple CSVs simultaneously in DB Browser? When I attempt to do this, it appears to combine them, and they don't end up as separate tables in the db.
Bui Thanh Xuan
@xuan2261
Hello everyone, I have a problem like this: I have an SQLite database with many tables; is there any way to set certain tables to be read-only?
smartmic
@smartmic
Hi there, where can I set the command for "Open in external application"? I want to edit a text cell with the editor of my choice but it opens Firefox instead. I could not find any settings in Preferences and I could also find no documentation about the feature.
Chris Locke
@chrisjlocke

I have a sqlite database with many tables, is there any way to set certain tables to be read-only?

Unfortunately not - a database is a collection of tables, and each table is a collection of rows. That data is either all editable or, if the database is in a read-only file location, all read-only. If you wanted certain tables to be read-only, you'd have to separate them out into their own database, make that database read-only, and then attach the two databases together. This isn't a great solution though - if someone moved the database to a non-read-only location, they could manipulate it again.
You could encrypt the database, but again, that's all-or-nothing - you can't encrypt just individual tables. Unless you wrote the data yourself, of course - if your own application read/wrote the database, you could store the data in an encrypted format so 'Joe Public' couldn't edit it.

where can I set the command for "Open in external application"?

I believe this saves a .txt file and tries to open it using whatever application normally opens a .txt file. What file is Firefox showing? I assume you're editing text, and not an image or binary data. What operating system are you using?