select *
from project_application_records par
inner join (
    select *
    from project_version pv
    where pv.project_id = par.project_id
    order by version desc
    limit 1
) as prv
[42S22][1054] Unknown column 'par.project_id' in 'where clause'
The project_application_records table is defined as follows:

Field                          Type          Null  Key  Extra
id                             bigint(20)    NO    PRI  auto_increment
application_info               json          NO
application_method             int(11)       NO
created_at                     datetime      NO
process_instance_id            varchar(255)  NO
status                         int(11)       NO
updated_at                     datetime      NO
dynamic_form_id                bigint(20)    NO    MUL
project_id                     bigint(20)    YES   MUL
user_id                        bigint(20)    YES   MUL
real_time_application_status   varchar(255)  YES
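The error occurs because a plain derived table cannot reference an outer alias such as par.project_id; the subquery is resolved independently of project_application_records. A minimal sketch of one common rewrite, assuming MySQL 8.0.14+ where LATERAL derived tables are available (on older versions, joining a GROUP BY project_id / MAX(version) derived table back to project_version achieves the same result):

-- Sketch only: pick the latest project_version row for each application record.
-- LATERAL lets the derived table see the outer alias `par` (MySQL 8.0.14+).
SELECT par.*, prv.*
FROM project_application_records par
INNER JOIN LATERAL (
    SELECT *
    FROM project_version pv
    WHERE pv.project_id = par.project_id
    ORDER BY pv.version DESC
    LIMIT 1
) AS prv ON TRUE;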
If I join table1 (with utf8mb4 collation) and table2 (with latin1_swedish_ci collation), is there a big performance hit?
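The mismatch only matters when the join compares string columns: MySQL then has to implicitly convert one side, and that conversion can prevent the optimizer from using an index on the converted column, so the join degrades toward scanning. Joins on integer keys are unaffected. A rough sketch of how this is typically checked and fixed, assuming the join is on a string column and that altering table2 is acceptable (the target collation below is only an example; match whatever table1 actually uses):

-- Check which collation each string column currently has.
SELECT table_name, column_name, collation_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name IN ('table1', 'table2')
  AND collation_name IS NOT NULL;

-- Convert table2 so both sides of the join compare without implicit conversion.
ALTER TABLE table2 CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;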
Approaches to handling 10 tables of 10 million records each, queried with several joins
I have 10 tables with roughly 10 million records each, and reading them involves several joins. I am looking for the best alternative, database design, or approach to read the records very quickly, i.e., the query should be fast.

option #1 - normalize the tables and avoid unnecessary joins
option #2 - select all the required columns in the first query, since the WHERE conditions will be applied multiple times inside a looping construct
option #3 - move to a NoSQL database instead of MySQL
Please advise
Thanks
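Before denormalizing or switching databases, it is usually worth confirming that the existing joins are backed by indexes; with suitable composite indexes, MySQL handles joins across tables of this size routinely. A hypothetical sketch of that first step (the orders/customers tables and their columns are invented purely to illustrate the pattern and are not from the question):

-- Hypothetical schema, for illustration only.
-- Composite index covering the join key plus the common filter/sort columns.
CREATE INDEX idx_orders_customer_status_created
    ON orders (customer_id, status, created_at);

-- EXPLAIN shows whether the join and WHERE clause actually use the index.
EXPLAIN
SELECT o.id, o.created_at, c.name
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id
WHERE o.status = 1
  AND o.created_at >= '2023-01-01'
ORDER BY o.created_at DESC
LIMIT 100;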