
Notes on C# HttpClient Requests


Efficiently Streaming Large HTTP Responses With HttpClient

Downloading large files with HttpClient and seeing it take up lots of memory? This post is probably for you. Let's see how to efficiently stream large HTTP responses with HttpClient.

I see common scenarios where people need to download large files (images, PDF files, etc.) in their .NET projects. What I mean by large here is probably not what you think: 500 KB is enough to qualify, because you will hit a memory limit once you try to download lots of files concurrently the wrong way, as below:

static async Task HttpGetForLargeFileInWrongWay()
{
    using (HttpClient client = new HttpClient())
    {
        const string url = "https://github.com/tugberkugurlu/ASPNETWebAPISamples/archive/master.zip";
        using (HttpResponseMessage response = await client.GetAsync(url))
        using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
        {
            string fileToWriteTo = Path.GetTempFileName();
            using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
            {
                await streamToReadFrom.CopyToAsync(streamToWriteTo);
            }

            response.Content = null;
        }
    }
}

By calling the GetAsync method directly there, we are loading every single byte into memory. You can see this happening in a simple way by opening Task Manager and observing the memory usage of the process.


We are calling ReadAsStreamAsync on HttpContent after the GetAsync method has completed. By then the entire body has already been buffered, so this just hands us a MemoryStream and gains us nothing.


We need a way to avoid loading the response body into memory and to get hold of the raw network stream instead, so that we can pass the bytes into another stream without hitting memory too hard. We can do that by reading only the headers of the response and then grabbing a handle to the network stream, as below:

static async Task HttpGetForLargeFileInRightWay()
{
    using (HttpClient client = new HttpClient())
    {
        const string url = "https://github.com/tugberkugurlu/ASPNETWebAPISamples/archive/master.zip";
        using (HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
        using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
        {
            string fileToWriteTo = Path.GetTempFileName();
            using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
            {
                await streamToReadFrom.CopyToAsync(streamToWriteTo);
            }
        }
    }
}

Notice that we are calling another overload of the GetAsync method, passing the HttpCompletionOption enumeration value ResponseHeadersRead. This switch tells the HttpClient not to buffer the response. In other words, it will just read the headers and return control back to the caller, which means the HttpContent is not ready at the time you get control back. Afterwards, we get the stream and call CopyToAsync on it, passing in our FileStream. The result is much better: memory stays low, since the response is streamed straight to disk.
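As a reusable variant, here is a minimal sketch built on the same pattern. The shared static HttpClient and the Downloader/DownloadFileAsync names are illustrative additions, not part of the original post:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

static class Downloader
{
    // Reusing a single HttpClient instance avoids socket exhaustion under load.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task DownloadFileAsync(string url, string filePath)
    {
        // ResponseHeadersRead returns control once the headers arrive,
        // so the body is streamed instead of buffered into memory.
        using (HttpResponseMessage response = await Client.GetAsync(
            url, HttpCompletionOption.ResponseHeadersRead))
        {
            response.EnsureSuccessStatusCode();
            using (Stream source = await response.Content.ReadAsStreamAsync())
            using (Stream target = File.Open(filePath, FileMode.Create))
            {
                await source.CopyToAsync(target);
            }
        }
    }
}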


Resources

  • Streaming with New .NET HttpClient and HttpCompletionOption.ResponseHeadersRead
  • Async reading chunked content with HttpClient from ASP.NET WebApi


Over the last few days I’ve been struggling with an issue: capturing HTTP content from arbitrary URLs while reading only a specified number of bytes from the connection. It seems easy enough, but it turns out that if you want to control bandwidth and read only a small amount of partial data from the TCP/IP connection, that is not easy to accomplish using the new HttpClient introduced in .NET 4.5, or even HttpWebRequest/Response (on which the new HttpClient is based), because the .NET stack automatically reads a fairly large chunk of data on the first read – presumably to capture the HTTP headers.

I’ll start this post by saying I didn’t find a full solution to this problem, but I’ll lay out some of the discoveries I made in my quest for small byte counts on the wire, some of which partially address the issue.

Why partial Requests? Why does this matter?

Here’s some background: I’m building a monitoring application that might be watching a huge number of URLs that get checked frequently for uptime – maybe 100,000 URLs checked on average once every minute. As you might expect, hitting that many URLs and retrieving the entire HTTP response when all you need are a few bytes to verify the content would incur a tremendous amount of network traffic. Assuming each URL returns an average of 10 KB of data, that works out to roughly 1 GB of data per minute (100,000 × 10 KB). Yikes!

Using HttpClient with Partial Responses

So my goal was to try to read only a small chunk of data – say the first 1000 or 2000 bytes – in which the user is allowed to search for content to match.

Using HttpClient you might do something like this:

[TestMethod]
public async Task HttpGetPartialDownloadTest()
{
    //ServicePointManager.CertificatePolicy = delegate { return true; };

    var httpclient = new HttpClient();
    var response = await httpclient.GetAsync("http://weblog.west-wind.com/posts/2012/Aug/21/An-Introduction-to-ASPNET-Web-API",
                                             HttpCompletionOption.ResponseHeadersRead);

    string text = null;

    using (var stream = await response.Content.ReadAsStreamAsync())
    {
        var bytes = new byte[1000];
        var bytesread = stream.Read(bytes, 0, 1000);
        stream.Close();

        text = Encoding.UTF8.GetString(bytes);
    }

    Assert.IsFalse(string.IsNullOrEmpty(text), "Text shouldn't be empty");
    Assert.IsTrue(text.Length == 1000, "Text should hold 1000 characters");

    Console.WriteLine(text);
}

This looks like it should do the trick, and indeed the code gives you a result that is 1000 characters long.

But not all is as it seems: while the .NET app gets its 1000 bytes, the data on the wire is actually much larger. If I use this code with a file that’s, say, 10 KB in size, I find that the entire response travels over the wire. If the file is bigger (like the URL above, which is a 110 KB article), the transfer gets truncated at around 20 KB or so – depending on how fast the connection is and how quickly the connection is closed.

I’m using Wireshark to look at the TCP/IP trace and see the actual data captured, and it’s definitely way bigger than my 1000 bytes of data. So what’s happening here?

TCP/IP Buffering

After discussion with a few people more knowledgeable in network theory, I found out that the .NET HTTP client stack caches TCP/IP traffic as it comes in. Normally this is exactly what you want: have the network connection read as much data as it can, as quickly as possible. The more data that is read up front, the more efficient the retrieval in general.

But for my use case this unfortunately doesn’t work. I want just 1000 bytes (or as close as possible to that anyway) and then immediately close the connection. No matter how I tried this either with HttpClient or HttpWebRequest, I was unable to make the buffering go away.

Even using the new features in .NET 4.5 that supposedly allow turning off buffering on HttpWebRequest, via AllowReadStreamBuffering=false, didn’t work:

[TestMethod]
public async Task HttpWebRequestTest()
{
    var request =
        HttpWebRequest.Create("http://weblog.west-wind.com/posts/2012/Aug/21/An-Introduction-to-ASPNET-Web-API")
        as HttpWebRequest;

    request.AllowReadStreamBuffering = false;
    request.AllowWriteStreamBuffering = false;

    Stream stream;
    byte[] buffer;
    using (var response = await request.GetResponseAsync() as HttpWebResponse)
    {
        stream = response.GetResponseStream();

        buffer = new byte[1000];
        int byteCount = await stream.ReadAsync(buffer, 0, buffer.Length);
        request.Abort(); // call ASAP to kill the connection
        response.Close();
    }
    stream.Close();

    string text = Encoding.UTF8.GetString(buffer);

    Console.WriteLine(text);
}

Even running this code, I get exactly 19,934 bytes of text from the response according to the Wireshark trace, which is not what I was hoping for.

Then I also tried an older application that uses WinInet to do a non-buffered read. There I also got buffering, although the buffer was roughly 8 KB, which is the size of the HTTP buffer I specify in the WinInet calls. Better, but also not an option, because WinInet is not reliable for many simultaneous connections.

TcpClient works better, but…

Several people suggested using TcpClient directly, and it turns out that raw TcpClient connections do give me a lot more control over the data travelling over the wire.

Using the following code I get a much more reasonable 3k data footprint:

[TestMethod]
public void TcpClient()
{
    var server = "weblog.west-wind.com";
    var pageName = "/posts/2012/Aug/21/An-Introduction-to-ASPNET-Web-API";
    int byteCount = 1000;

    const int port = 80;
    TcpClient client = new TcpClient(server, port);

    string fullRequest = "GET " + pageName + " HTTP/1.1\nHost: " + server + "\n\n";
    byte[] outputData = System.Text.Encoding.ASCII.GetBytes(fullRequest);

    NetworkStream stream = client.GetStream();
    stream.Write(outputData, 0, outputData.Length);

    byte[] inputData = new byte[byteCount];

    var actualByteCountReceived = stream.Read(inputData, 0, byteCount);

    string responseData = System.Text.Encoding.ASCII.GetString(inputData, 0, actualByteCountReceived);

    stream.Close();
    client.Close();

    Console.WriteLine(responseData);
}
It’s still bigger than the 1,000 bytes I’m requesting, but significantly smaller than anything I was able to get with any of the Windows HTTP clients.

Unfortunately, using TcpClient generically is not a good option for my use case. I need to hit arbitrary URLs of all kinds, and I really don’t want to re-implement a full HTTP client stack on top of TcpClient… implementing SSL, authentication of all sorts, redirects, 100-continues, etc. is not a trivial matter – especially SSL.
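That said, if the TLS handshake alone were the blocker (no auth, redirects, or continues), SslStream can wrap a TcpClient’s stream. A rough sketch, with an illustrative host; the same buffering caveats still apply at the TLS record level:

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Text;

static void RawTlsGet(string host)
{
    using (var client = new TcpClient(host, 443))
    using (var ssl = new SslStream(client.GetStream()))
    {
        // Performs the TLS handshake and validates the server certificate.
        ssl.AuthenticateAsClient(host);

        string request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n";
        byte[] requestBytes = Encoding.ASCII.GetBytes(request);
        ssl.Write(requestBytes, 0, requestBytes.Length);

        // Read only the first chunk; note that full TLS records are still
        // received and decrypted, so the wire footprint is somewhat larger.
        var buffer = new byte[1000];
        int read = ssl.Read(buffer, 0, buffer.Length);
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
    }
}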

Why not use HEAD requests?

HTTP also supports HEAD requests, which retrieve only the HTTP headers. This is often ideal for monitoring situations, as it doesn’t bring back any content at all.
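A minimal sketch with HttpClient, inside an async test method like the ones above; the URL is illustrative:

using (var client = new HttpClient())
{
    var request = new HttpRequestMessage(HttpMethod.Head, "http://example.com/");
    using (HttpResponseMessage response = await client.SendAsync(request))
    {
        // Only the status line and headers travel over the wire.
        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(response.Content.Headers.ContentLength); // size, if reported
    }
}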

Unfortunately, in my scenario this is not going to work, at least not for everything. First, I need to look at the content to determine that the content – not just the headers – is valid. The other problem is that the target URL’s server has to support HEAD requests – not a given either. ASP.NET and IIS’s default handler entries in web.config in the past didn’t include the HEAD verb, which would make HEAD requests fail immediately.

So again, for generic URL access this isn’t going to work, although it might be good as an option.

What about Range Headers?

HTTP 1.1 supports the concept of range headers, which allow for retrieving partial responses. The feature is meant for large files: the file is sent in chunks so that individual chunks can be re-requested if a transmission is aborted. Ranges are easy to grab from the server by simply requesting a byte range.

A range request can look as simple as this:

GET http://west-wind.com/presentations/DotnetWebRequest/DotNetWebREquest.htm HTTP/1.1
Range: bytes=0-1000
Host: west-wind.com
Connection: Keep-Alive

Here I’m simply asking for the range of bytes between 0 and 1000. Normally you’re also supposed to send an ETag – the usual flow goes: call the page with a HEAD request, get the size and an ETag, then start using Range requests to chunk the data from the server. The server responds with a 206 Partial Content and only physically pushes down the requested number of bytes.

Using HttpClient this looks like this:

[TestMethod]
public async Task HttpClientGetStreamTest()
{
    string url = "http://west-wind.com/presentations/DotnetWebRequest/DotNetWebREquest.htm";
    int size = 1000;

    using (var httpclient = new HttpClient())
    {
        httpclient.DefaultRequestHeaders.Range = new RangeHeaderValue(0, size);

        var response = await httpclient.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);

        using (var stream = await response.Content.ReadAsStreamAsync())
        {
            var bytes = new byte[size];
            var bytesread = stream.Read(bytes, 0, bytes.Length);
            stream.Close();
        }
    }
}

This works great – if the server supports it. Most modern web servers support range requests natively, so this works out of the box on static content. If the content is dynamic, however, it doesn’t work, because the code generating the response has to support ranges somehow. It works on the static HTML page I reference above, but it doesn’t work on the dynamic ASP.NET weblog request I used in the earlier examples.

For my scenario I’m going to always add the range header in the hope that the server and link I’m hitting support it, but chances are they don’t and the response will be a full response.
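One hedged refinement (my own sketch, reusing the url, size and httpclient names from the test above): since a server that ignores the Range header answers 200 with the full body, checking for 206 Partial Content tells you whether the range was honored before you decide how much to read:

var rangeRequest = new HttpRequestMessage(HttpMethod.Get, url);
rangeRequest.Headers.Range = new RangeHeaderValue(0, size);

using (var response = await httpclient.SendAsync(rangeRequest, HttpCompletionOption.ResponseHeadersRead))
{
    if (response.StatusCode == HttpStatusCode.PartialContent)
    {
        // Range honored: roughly `size` bytes on the wire.
    }
    else
    {
        // Full 200 response: fall back to reading a limited slice and closing early.
    }
}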

How to check Wire Traffic

Turns out checking what’s happening on the wire is not as trivial as you might think.

Fiddler – not a good idea

I love Fiddler and use it daily for all sorts of HTTP monitoring and testing. It’s an awesome tool, but for monitoring wire traffic size it’s unfortunately not well suited (I think – Eric Lawrence keeps making me realize with his nudges how little of Fiddler’s features I actually use or know about).

So initially, when I wanted to see how much data was actually transferred, I went to Fiddler, since it’s my go-to tool. But I quickly found that no matter what I sent, Fiddler would always retrieve the entire HTTP response. Initially I assumed that meant the HTTP client was reading the entire response, but that’s not actually the case. Fiddler is a proxy, and as such retrieves requests on behalf of the client: you send an HTTP request, and Fiddler then retrieves it for you and feeds it back to your application. This means the entire response is retrieved (unless the HTTP headers specify otherwise).

So, Fiddler doesn’t really help in tracking actual wire traffic.

.NET System.Net Tracing

.NET’s tracing system actually provides a ton of information regarding network operations. It tells you when it connects, reads, writes and closes connections, shows byte counts, etc. Unfortunately, it also shows some incorrect information when it comes to the TCP/IP data that actually travels the wire versus what is read through the interface.

To turn on tracing with a ConsoleTraceListener (plus a file listener), add trace listeners for the System.Net source to the application's .config file:

<system.diagnostics>
    <sources>
        <source name="System.Net" tracemode="includehex">
            <listeners>
                <add name="MyConsole" />
                <add name="MyTraceFile" />
            </listeners>
        </source>
    </sources>
    <sharedListeners>
        <add name="MyConsole" type="System.Diagnostics.ConsoleTraceListener" />
        <add
            name="MyTraceFile"
            type="System.Diagnostics.TextWriterTraceListener"
            initializeData="System.Net.trace.log"
        />
    </sharedListeners>
    <switches>
        <add name="System.Net" value="Verbose" />
    </switches>
</system.diagnostics>
This works great for tests, which can display the console output directly in the test output.

One line in this trace in particular is a problem:

System.Net Information: 0 : [6708] ConnectStream#45653674::ConnectStream(Buffered 110109 bytes.)

Notice that it seems to indicate that the request buffered the entire content! It turns out this line is actually bullshit – the connect stream is buffering, but it’s not buffering whatever that byte value claims. The actual data on the wire ends up being only 19,934 bytes, so this line is definitely wrong.

Between this line and the lines that show the actual data read from the connection and the final count, the values that come from the system trace are not reliable for telling what actual network traffic was incurred.

Wireshark

So, that led me back to Wireshark. Wireshark is a great packet-level network sniffer, and it works great for these sorts of things. However, I use Wireshark once a year or less, so every time I fire it up I forget how to set up the filters to get only what I’m interested in. Basically, you’ll want to filter down to HTTP traffic only and then look through all the captured packets that carry data, which is tedious. But I can get the data I need. From this I could tell that on the long 110 KB request I was not reading the entire response, but on smaller responses I was in fact getting the entire response.
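For reference, a display filter such as http limits the view to HTTP packets, and something like tcp.port == 80 && tcp.len > 0 narrows it to data-carrying segments on port 80 (standard Wireshark display-filter syntax; adjust the port for your target).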

Here’s what the trace looks like on the 110 KB request (using HttpWebRequest), which reads ~19 KB of text:

[Wireshark trace screenshot]

BTW, here’s a cool tip: did you know that you can take a Wireshark pcap trace export and view it in Fiddler? It’s a much nicer way to look at HTTP requests than inside of Wireshark.

To do this:

  • In Wireshark, select all captured packets
  • Go to File | Export | Export as .pcap file
  • Go into Fiddler
  • Go to File | Import Sessions | Packet Capture
  • Pick the .pcap file and see the requests in the browser

This may seem silly, since you could capture directly in Fiddler, but remember that Fiddler is a proxy, so it pulls the data from the server and then forwards it. By capturing with Wireshark at the protocol level, you can see what’s really happening on the wire, and by importing into Fiddler you can see the truncated requests.

Once imported into Fiddler, I can now see more easily what’s happening. The reconstructed trace in Fiddler from my test looks like this:

[Fiddler view of the truncated response]

This is the Wireshark-imported trace. The response header shows the full content length:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Vary: Content-Encoding
Server: Microsoft-IIS/7.0
Date: Sat, 11 Jan 2014 00:30:41 GMT
Content-Length: 110061

but the actual content captured (up to the highlighted nulls in the screenshot) is exactly 19,934 bytes. Repeatedly. So this tells us the response is indeed getting truncated, but not immediately – there’s buffering of the HTTP stream.

However, if you look at a network trace, you’ll find that the actual data that was sent is much larger. I chose this specific URL because it’s about 110 KB of text (yeah, a long article :-)). If you choose a smaller file, say 10–20 KB in size, you’ll find that the entire file is sent. Here, with the 110 KB file, the actual data that came over the wire is about 20 KB. While 20 KB is a lot better than 110 KB, it’s still too much data on the wire when I’m only interested in the first 1000 bytes.

Where are we?

As I mentioned at the outset of this post, I haven’t found a complete solution to my problem at this point. There are a number of ideas that reduce the traffic in some situations, but none of them work for all cases.

I think, moving forward, the best option for this particular application will likely be to create a TCP/IP client that handles the ‘simple’ requests, reading a set byte count with some extra padding for the expected header size. Plain URL access without HTTPS I can handle with the TCP/IP client. For HTTPS requests, authentication, redirects, etc., I have to live with the HttpClient/HttpWebRequest behavior and apply Range headers to everything to limit the data output from the server, if it happens to be supported.

I’m hoping that by posting here, somebody might have some additional ideas about how to limit the initial HTTP read buffer size for HttpWebRequest/HttpClient.

Resources

  • WireShark
  • Fiddler
  • My original StackOverflow Post from which this was compiled
    (thanks to Shawty and Darrel Miller for their help)


(StreamReader.ReadLine() == null) or (-1 != StreamReader.Peek())?

This is also a sample that Microsoft provides. In actual use, I found that the approach sometimes fails to read all the lines of a file completely. I suspected the buffer was too small, so I checked the MSDN documentation:

 

The StreamReader.ReadLine method defines a line as a sequence of characters followed by a line feed ("\n"), a carriage return ("\r"), or a carriage return immediately followed by a line feed ("\r\n"). The string that is returned does not contain the terminating carriage return or line feed. The return value is null if the end of the input stream has been reached.
http://msdn.microsoft.com/zh-cn/library/system.io.streamreader.readline.aspx

My understanding: if an encoding problem causes a read error – that is, the line terminator can’t be recognized – the reader may conclude it has already reached the end of the file and stop reading the next line. That would explain why the file is sometimes read incompletely.

 

Instead, use StreamReader's Peek() method. According to MSDN:

 

The Peek method returns an integer value in order to determine whether the end of the file, or another error, has occurred. This allows a user to first check whether the returned value is -1 before casting it to a Char type.

In other words, it can report whether the end of the file has been reached without converting the character first.

http://msdn.microsoft.com/zh-cn/library/system.io.streamreader.peek.aspx 
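A minimal sketch contrasting the two loop conditions (the file path and encoding are illustrative):

using (var reader = new StreamReader("data.txt", Encoding.GetEncoding("gb2312")))
{
    // Variant 1: stop when ReadLine() returns null. Per the note above,
    // this can end early if the line terminator is misread.
    // string line;
    // while ((line = reader.ReadLine()) != null) { Console.WriteLine(line); }

    // Variant 2: check Peek() first; -1 signals end of file (or an error),
    // with no need to convert the value to a char before testing it.
    while (reader.Peek() != -1)
    {
        Console.WriteLine(reader.ReadLine());
    }
}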

 

 

string respContent = string.Empty;

Encoding encode = Encoding.GetEncoding("gb2312");
try
{
    using (HttpResponseMessage response = await this.Client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
    using (Stream streamOfResponse = await response.Content.ReadAsStreamAsync())
    {
        using (MemoryStream streamOfBuffer = new MemoryStream())
        {
            await streamOfResponse.CopyToAsync(streamOfBuffer);

            // After filling the buffer, reset the cursor to the start of the stream.
            streamOfBuffer.Seek(0, SeekOrigin.Begin);
            var allBytes = streamOfBuffer.ToArray();

            respContent = encode.GetString(allBytes);

            // Alternative: decompress a gzipped response, handling the Chinese encoding.
            //GZipStream gzip = new GZipStream(streamOfBuffer, CompressionMode.Decompress);
            //using (StreamReader reader = new StreamReader(gzip, Encoding.GetEncoding("gb2312")))
            //{
            //    respContent = reader.ReadToEnd();
            //}

            // Alternative: read the buffer in 8 KB chunks and append to a StringBuilder.
            //StringBuilder sb = new StringBuilder();
            //byte[] buf = new byte[8192];
            //int count;
            //do
            //{
            //    count = streamOfBuffer.Read(buf, 0, buf.Length);
            //    if (count != 0)
            //    {
            //        sb.Append(encode.GetString(buf, 0, count));
            //    }
            //} while (count > 0);
            //respContent = sb.ToString();
        }
    }
}
catch (Exception)
{
    throw; // rethrow without resetting the stack trace
}
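A hedged alternative to the commented-out GZipStream path above (the handler setup is my own illustration, not from the original notes): an HttpClientHandler can decompress gzip/deflate responses automatically.

var handler = new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
var client = new HttpClient(handler);
// response.Content.ReadAsStreamAsync() then yields the decompressed bytes,
// which can be decoded with Encoding.GetEncoding("gb2312") as above.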

  

   
