
Left Join

A developer had a query that ran for more than two hours without returning a result, and asked me why it was taking so long.
The statement looked like this: select * from tgt1 a left join tgt2 b on a.id=b.id and a.id>=6 order by a.id; This is a classic misunderstanding: the intent was to filter table a first and then perform the left join. Let's look at what a left join really does.


[gpadmin@mdw ~]$ psql bigdatagp  
  
psql (8.2.15)  
  
Type "help" for help.  
  
  
  
bigdatagp=# drop table tgt1;  
  
DROP TABLE  
  
bigdatagp=# drop table tgt2;  
  
DROP TABLE  
  
bigdatagp=# create table tgt1(id int, name varchar(20));  
  
NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.  
  
HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.  
  
CREATE TABLE  
  
bigdatagp=# create table tgt2(id int, name varchar(20));   
  
NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.  
  
HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.  
  
CREATE TABLE  
  
bigdatagp=# insert into tgt1 select generate_series(1,3),('a','b');  
  
ERROR:  column "name" is of type character varying but expression is of type record  
  
HINT:  You will need to rewrite or cast the expression.  
  
bigdatagp=# insert into tgt1 select generate_series(1,5),generate_series(1,5)||'a';  
  
INSERT 0 5  
  
bigdatagp=# insert into tgt2 select generate_series(1,2),generate_series(1,2)||'a';      
  
INSERT 0 2  
  
bigdatagp=# select * from tgt1;  
  
 id | name   
  
----+------  
  
  2 | 2a  
  
  4 | 4a  
  
  1 | 1a  
  
  3 | 3a  
  
  5 | 5a  
  
(5 rows)  
  
  
  
bigdatagp=# select * from tgt1 order by id;  
  
 id | name   
  
----+------  
  
  1 | 1a  
  
  2 | 2a  
  
  3 | 3a  
  
  4 | 4a  
  
  5 | 5a  
  
(5 rows)  
  
  
  
bigdatagp=# select * from tgt2 order by id;   
  
 id | name   
  
----+------  
  
  1 | 1a  
  
  2 | 2a  
  
(2 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id;  
  
 id | name | id | name   
  
----+------+----+------  
  
  3 | 3a   |    |   
  
  5 | 5a   |    |   
  
  1 | 1a   |  1 | 1a  
  
  2 | 2a   |  2 | 2a  
  
  4 | 4a   |    |   
  
(5 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id order by a.id;  
  
 id | name | id | name   
  
----+------+----+------  
  
  1 | 1a   |  1 | 1a  
  
  2 | 2a   |  2 | 2a  
  
  3 | 3a   |    |   
  
  4 | 4a   |    |   
  
  5 | 5a   |    |   
  
(5 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id where id>=3 order by a.id;  
  
ERROR:  column reference "id" is ambiguous  
  
LINE 1: ...* from tgt1 a left join tgt2 b on a.id=b.id where id>=3 orde...  
  
                                                             ^  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id where a.id>=3 order by a.id;  
  
 id | name | id | name   
  
----+------+----+------  
  
  3 | 3a   |    |   
  
  4 | 4a   |    |   
  
  5 | 5a   |    |   
  
(3 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id and a.id>=3 order by a.id;          
  
 id | name | id | name   
  
----+------+----+------  
  
  1 | 1a   |    |   
  
  2 | 2a   |    |   
  
  3 | 3a   |    |   
  
  4 | 4a   |    |   
  
  5 | 5a   |    |   
  
(5 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id where a.id>=6 order by a.id;   
  
 id | name | id | name   
  
----+------+----+------  
  
(0 rows)  
  
  
  
bigdatagp=# select * from tgt1 a left join tgt2 b on a.id=b.id and a.id>=6 order by a.id;       
  
 id | name | id | name   
  
----+------+----+------  
  
  1 | 1a   |    |   
  
  2 | 2a   |    |   
  
  3 | 3a   |    |   
  
  4 | 4a   |    |   
  
  5 | 5a   |    |   
  
(5 rows)  
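The behavior above is standard SQL, not Greenplum-specific: in a left join, a condition on the left table inside ON only decides which rows of b get matched, so every row of a still comes back. A minimal sketch of the same two queries using Python's sqlite3 (an assumption for illustration; the article itself uses Greenplum/psql) reproduces it:

```python
import sqlite3

# In-memory database with the same toy data as tgt1/tgt2 above
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tgt1(id INT, name TEXT);
    CREATE TABLE tgt2(id INT, name TEXT);
    INSERT INTO tgt1 VALUES (1,'1a'),(2,'2a'),(3,'3a'),(4,'4a'),(5,'5a');
    INSERT INTO tgt2 VALUES (1,'1a'),(2,'2a');
""")

# Condition in WHERE: applied to the join result, so it filters rows of a
where_rows = con.execute("""
    SELECT * FROM tgt1 a LEFT JOIN tgt2 b
    ON a.id = b.id WHERE a.id >= 6 ORDER BY a.id
""").fetchall()

# Condition in ON: only controls which rows of b are matched;
# every row of a survives the left join, NULL-padded when unmatched
on_rows = con.execute("""
    SELECT * FROM tgt1 a LEFT JOIN tgt2 b
    ON a.id = b.id AND a.id >= 6 ORDER BY a.id
""").fetchall()

print(len(where_rows))  # 0
print(len(on_rows))     # 5
print(on_rows[0])       # (1, '1a', None, None)
```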
  
  
  
bigdatagp=# explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id where a.id>=3 order by a.id;  
  
                                                                    QUERY PLAN                                                                       
  
---------------------------------------------------------------------------------------------------------------------------------------------------  
  
 Gather Motion 64:1  (slice1; segments: 64)  (cost=7.18..7.19 rows=1 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  3 rows at destination with 21 ms to end, start offset by 559 ms.  
  
   ->  Sort  (cost=7.18..7.19 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  Avg 1.0 rows x 3 workers.  Max 1 rows (seg52) with 5.452 ms to first row, 5.454 ms to end, start offset by 564 ms.  
  
         Executor memory:  63K bytes avg, 74K bytes max (seg2).  
  
         Work_mem used:  63K bytes avg, 74K bytes max (seg2). Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.15 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Rows out:  Avg 1.0 rows x 3 workers.  Max 1 rows (seg52) with 4.190 ms to first row, 4.598 ms to end, start offset by 565 ms.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.06 rows=1 width=7)  
  
                     Filter: id >= 3  
  
                     Rows out:  Avg 1.0 rows x 3 workers.  Max 1 rows (seg52) with 0.156 ms to first row, 0.158 ms to end, start offset by 565 ms.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Rows out:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 332K bytes.  
  
   (slice1)    Executor memory: 446K bytes avg x 64 workers, 4329K bytes max (seg52).  Work_mem: 74K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 580.630 ms  
  
(24 rows)  
  
  
  
bigdatagp=# explain analyze  select * from tgt1 a left join tgt2 b on a.id=b.id and a.id>=3 order by a.id;   
  
                                                                       QUERY PLAN                                                                          
  
---------------------------------------------------------------------------------------------------------------------------------------------------------  
  
 Gather Motion 64:1  (slice1; segments: 64)  (cost=7.23..7.24 rows=1 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  5 rows at destination with 24 ms to end, start offset by 701 ms.  
  
   ->  Sort  (cost=7.23..7.24 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 6.292 ms to first row, 6.294 ms to end, start offset by 715 ms.  
  
         Executor memory:  70K bytes avg, 74K bytes max (seg0).  
  
         Work_mem used:  70K bytes avg, 74K bytes max (seg0). Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.17 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Join Filter: a.id >= 3  
  
               Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 4.422 ms to first row, 5.055 ms to end, start offset by 717 ms.  
  
               Executor memory:  1K bytes avg, 1K bytes max (seg42).  
  
               Work_mem used:  1K bytes avg, 1K bytes max (seg42). Workfile: (0 spilling, 0 reused)  
  
               (seg42)  Hash chain length 1.0 avg, 1 max, using 1 of 262151 buckets.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.05 rows=1 width=7)  
  
                     Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 0.179 ms to first row, 0.180 ms to end, start offset by 717 ms.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.194 ms to end, start offset by 721 ms.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Rows out:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.143 ms to first row, 0.145 ms to end, start offset by 721 ms.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 332K bytes.  
  
   (slice1)    Executor memory: 581K bytes avg x 64 workers, 4353K bytes max (seg42).  Work_mem: 74K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 725.316 ms  
  
(27 rows)  
  
  
  
bigdatagp=# explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id where a.id>=6 order by a.id;    
  
                                                  QUERY PLAN                                                    
  
--------------------------------------------------------------------------------------------------------------  
  
 Gather Motion 64:1  (slice1; segments: 64)  (cost=7.17..7.18 rows=1 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  (No row requested) 0 rows at destination with 6.536 ms to end, start offset by 1.097 ms.  
  
   ->  Sort  (cost=7.17..7.18 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
         Executor memory:  33K bytes avg, 33K bytes max (seg0).  
  
         Work_mem used:  33K bytes avg, 33K bytes max (seg0). Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.15 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Rows out:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.06 rows=1 width=7)  
  
                     Filter: id >= 6  
  
                     Rows out:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Rows out:  (No row requested) 0 rows (seg0) with 0 ms to end.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 332K bytes.  
  
   (slice1)    Executor memory: 225K bytes avg x 64 workers, 225K bytes max (seg0).  Work_mem: 33K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 8.615 ms  
  
(24 rows)  
  
  
  
bigdatagp=# explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id and a.id>=6 order by a.id;          
  
                                                                       QUERY PLAN                                                                         
  
--------------------------------------------------------------------------------------------------------------------------------------------------------  
  
 Gather Motion 64:1  (slice1; segments: 64)  (cost=7.23..7.24 rows=1 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  5 rows at destination with 115 ms to end, start offset by 1.195 ms.  
  
   ->  Sort  (cost=7.23..7.24 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 6.979 ms to first row, 6.980 ms to end, start offset by 12 ms.  
  
         Executor memory:  72K bytes avg, 74K bytes max (seg0).  
  
         Work_mem used:  72K bytes avg, 74K bytes max (seg0). Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.17 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Join Filter: a.id >= 6  
  
               Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 5.570 ms to first row, 6.157 ms to end, start offset by 12 ms.  
  
               Executor memory:  1K bytes avg, 1K bytes max (seg42).  
  
               Work_mem used:  1K bytes avg, 1K bytes max (seg42). Workfile: (0 spilling, 0 reused)  
  
               (seg42)  Hash chain length 1.0 avg, 1 max, using 1 of 262151 buckets.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.05 rows=1 width=7)  
  
                     Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 0.050 ms to first row, 0.051 ms to end, start offset by 12 ms.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.153 ms to end, start offset by 18 ms.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Rows out:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.133 ms to first row, 0.135 ms to end, start offset by 18 ms.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 332K bytes.  
  
   (slice1)    Executor memory: 583K bytes avg x 64 workers, 4353K bytes max (seg42).  Work_mem: 74K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 116.997 ms  
  
(27 rows)  
  
  
  
bigdatagp=#  explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id where id=6 order by a.id;  
  
ERROR:  column reference "id" is ambiguous  
  
LINE 1: ...* from tgt1 a left join tgt2 b on a.id=b.id where id=6 order...  
  
                                                             ^  
  
bigdatagp=#  explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id where a.id=6 order by a.id;  
  
                                             QUERY PLAN                                                
  
-----------------------------------------------------------------------------------------------------  
  
 Gather Motion 1:1  (slice1; segments: 1)  (cost=7.17..7.18 rows=4 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  (No row requested) 0 rows at destination with 3.212 ms to end, start offset by 339 ms.  
  
   ->  Sort  (cost=7.17..7.18 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  (No row requested) 0 rows with 0 ms to end.  
  
         Executor memory:  58K bytes.  
  
         Work_mem used:  58K bytes. Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.14 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Rows out:  (No row requested) 0 rows with 0 ms to end.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.06 rows=1 width=7)  
  
                     Filter: id = 6  
  
                     Rows out:  (No row requested) 0 rows with 0 ms to end.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  (No row requested) 0 rows with 0 ms to end.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Filter: id = 6  
  
                           Rows out:  (No row requested) 0 rows with 0 ms to end.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 252K bytes.  
  
   (slice1)    Executor memory: 251K bytes (seg3).  Work_mem: 58K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 342.067 ms  
  
(25 rows)  
  
  
  
bigdatagp=#  explain analyze select * from tgt1 a left join tgt2 b on a.id=b.id and a.id=6 order by a.id;        
  
                                                                       QUERY PLAN                                                                         
  
--------------------------------------------------------------------------------------------------------------------------------------------------------  
  
 Gather Motion 64:1  (slice1; segments: 64)  (cost=7.23..7.24 rows=1 width=14)  
  
   Merge Key: "?column5?"  
  
   Rows out:  5 rows at destination with 435 ms to end, start offset by 1.130 ms.  
  
   ->  Sort  (cost=7.23..7.24 rows=1 width=14)  
  
         Sort Key: a.id  
  
         Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 5.156 ms to first row, 5.158 ms to end, start offset by 7.597 ms.  
  
         Executor memory:  58K bytes avg, 58K bytes max (seg0).  
  
         Work_mem used:  58K bytes avg, 58K bytes max (seg0). Workfile: (0 spilling, 0 reused)  
  
         ->  Hash Left Join  (cost=2.04..7.17 rows=1 width=14)  
  
               Hash Cond: a.id = b.id  
  
               Join Filter: a.id = 6  
  
               Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 4.155 ms to first row, 4.813 ms to end, start offset by 7.930 ms.  
  
               Executor memory:  1K bytes avg, 1K bytes max (seg42).  
  
               Work_mem used:  1K bytes avg, 1K bytes max (seg42). Workfile: (0 spilling, 0 reused)  
  
               (seg42)  Hash chain length 1.0 avg, 1 max, using 1 of 262151 buckets.  
  
               ->  Seq Scan on tgt1 a  (cost=0.00..5.05 rows=1 width=7)  
  
                     Rows out:  Avg 1.0 rows x 5 workers.  Max 1 rows (seg42) with 0.126 ms to first row, 0.127 ms to end, start offset by 7.941 ms.  
  
               ->  Hash  (cost=2.02..2.02 rows=1 width=7)  
  
                     Rows in:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.103 ms to end, start offset by 12 ms.  
  
                     ->  Seq Scan on tgt2 b  (cost=0.00..2.02 rows=1 width=7)  
  
                           Rows out:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg42) with 0.074 ms to first row, 0.076 ms to end, start offset by 12 ms.  
  
 Slice statistics:  
  
   (slice0)    Executor memory: 332K bytes.  
  
   (slice1)    Executor memory: 569K bytes avg x 64 workers, 4337K bytes max (seg42).  Work_mem: 58K bytes max.  
  
 Statement statistics:  
  
   Memory used: 128000K bytes  
  
 Total runtime: 436.384 ms  
  
(27 rows)  

So: to filter table a, the condition must go in the WHERE clause; to filter table b, put the condition in a subquery over b. A condition on a inside ON does not remove rows from a — in a left join it only decides which rows of b get matched, so every row of a still comes back, NULL-padded where there is no match.
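A hedged sketch of the two correct rewrites, again with sqlite3 on the article's toy tables (the subquery aliases are illustrative, not from the original session): filter a via WHERE or a subquery, and filter b via a subquery while keeping every row of a.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tgt1(id INT, name TEXT);
    CREATE TABLE tgt2(id INT, name TEXT);
    INSERT INTO tgt1 VALUES (1,'1a'),(2,'2a'),(3,'3a'),(4,'4a'),(5,'5a');
    INSERT INTO tgt2 VALUES (1,'1a'),(2,'2a');
""")

# To filter a: restrict a before (or after -- same result) the left join
rows = con.execute("""
    SELECT * FROM (SELECT * FROM tgt1 WHERE id >= 3) a
    LEFT JOIN tgt2 b ON a.id = b.id ORDER BY a.id
""").fetchall()
print(rows)  # three rows (id 3,4,5), all NULL-padded on the b side

# To filter b while keeping all rows of a: restrict b in a subquery
rows2 = con.execute("""
    SELECT * FROM tgt1 a
    LEFT JOIN (SELECT * FROM tgt2 WHERE id >= 2) b
    ON a.id = b.id ORDER BY a.id
""").fetchall()
print(len(rows2))  # 5 -- every row of a is preserved
```

For an inner join the two placements are interchangeable; it is only the outer join's NULL-padding that makes ON and WHERE diverge.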

-EOF-

