Creating a pandas DataFrame from JSON objects

无声胜有剩 posted on 2023-02-09 18:29

I have finally managed to output the data I need from a file containing many JSON objects, but I need some help turning the output below into a single DataFrame as the code loops over the data. Here is the code that produces the output, followed by a sample of the results:

Raw data:

{
"zipcode":"08989",
"current":{"canwc":null,"cig":4900,"class":"observation","clds":"OVC","day_ind":"D","dewpt":19,"expireTimeGMT":1385486700,"feels_like":34,"gust":null,"hi":37,"humidex":null,"icon_code":26,"icon_extd":2600,"max_temp":37,"wxMan":"wx1111"},
"triggers":[53,31,9,21,48,7,40,178,55,179,176,26,103,175,33,51,20,57,112,30,50,113]
}
{
"zipcode":"08990",
"current":{"canwc":null,"cig":4900,"class":"observation","clds":"OVC","day_ind":"D","dewpt":19,"expireTimeGMT":1385486700,"feels_like":34,"gust":null,"hi":37,"humidex":null,"icon_code":26,"icon_extd":2600,"max_temp":37, "wxMan":"wx1111"},
"triggers":[53,31,9,21,48,7,40,178,55,179,176,26,103,175,33,51,20,57,112,30,50,113]
}
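As an aside, the raw data above is not one valid JSON document but a stream of concatenated objects. A sketch (not from the original post, and not assuming a fixed number of lines per object) is to walk the stream with `json.JSONDecoder.raw_decode`, which parses one object at a time and reports where it ended:

```python
import json

def iter_json_objects(text):
    """Yield each JSON object from a string of concatenated objects."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # Skip whitespace (e.g. newlines) between objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, end = decoder.raw_decode(text, idx)
        yield obj
        idx = end

# Illustrative two-object stream, shaped like the data above
stream = ('{"zipcode": "08989", "triggers": [53, 31]}\n'
          '{"zipcode": "08990", "triggers": []}')
objs = list(iter_json_objects(stream))
```

This sidesteps the need for the fixed `lines_per_n` chunking below when the objects span a varying number of lines.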

import glob
import itertools
import json
from itertools import chain

import pandas as pd

def lines_per_n(f, n):
    """Yield successive n-line chunks of file f as single strings."""
    for line in f:
        yield ''.join(chain([line], itertools.islice(f, n - 1)))

for fin in glob.glob('*.txt'):
    with open(fin) as f:
        for chunk in lines_per_n(f, 5):
            try:
                jfile = json.loads(chunk)
                zipcode = jfile['zipcode']
                datetime = jfile['current']['proc_time']
                triggers = jfile['triggers']
                print pd.Series(jfile['zipcode']),\
                      pd.Series(jfile['current']['proc_time']),\
                      jfile['triggers']
            except ValueError, e:
                pass
            else:
                pass
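As an alternative sketch (Python 3 syntax, with an inline list of sample chunks standing in for the `lines_per_n` output, and column names of my own choosing), the loop can accumulate plain tuples and build the DataFrame once at the end instead of printing each record:

```python
import json

import pandas as pd

# Illustrative chunks; in the real code these come from lines_per_n(f, 5)
chunks = [
    '{"zipcode": "08988", "current": {"proc_time": "20131126102946"}, "triggers": []}',
    '{"zipcode": "08989", "current": {"proc_time": "20131126102946"}, "triggers": [53, 31]}',
]

rows = []
for chunk in chunks:
    try:
        jfile = json.loads(chunk)
        rows.append((jfile['zipcode'],
                     jfile['current']['proc_time'],
                     jfile['triggers']))
    except ValueError:
        pass  # skip chunks that are not valid JSON

df = pd.DataFrame(rows, columns=['zipcode', 'proc_time', 'triggers'])
```

Building the frame once from a list of rows is usually simpler than concatenating per-row frames inside the loop.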

Running the above produces the sample output below, which I want to store in a pandas DataFrame as 3 columns.

08988 20131126102946 []
08989 20131126102946 [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]
08988 20131126102946 []
08989 20131126102946 [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]
00544 20131126102946 [178, 30, 176, 103, 179, 112, 21, 20, 48]

So the code below looks closer, but if I pass it in as a list and transpose the df, it gives me a funky df. Any ideas on how to reshape this properly?

def series_chunk(chunk):
    jfile = json.loads(chunk)
    zipcode = jfile['zipcode']
    datetime = jfile['current']['proc_time']
    triggers = jfile['triggers']
    return jfile['zipcode'],\
            jfile['current']['proc_time'],\
            jfile['triggers']

for fin in glob.glob('*.txt'):
    with open(fin) as f:
        for chunk in lines_per_n(f, 7):
            df1 = pd.DataFrame(list(series_chunk(chunk)))
            print df1.T

[u'08988', u'20131126102946', []]
[u'08989', u'20131126102946', [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]]
[u'08988', u'20131126102946', []]
[u'08989', u'20131126102946', [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]]

The resulting DataFrames:

   0               1   2
0  08988  20131126102946  []
       0               1                                                  2
0  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
       0               1   2
0  08988  20131126102946  []
       0               1                                                  2
0  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...

Here is my final code and output. How do I capture each DataFrame it creates through the loop and concatenate them on the fly into one DataFrame object?

for fin in glob.glob('*.txt'):
    with open(fin) as f:
        print pd.concat([series_chunk(chunk) for chunk in lines_per_n(f, 7)], axis=1).T

       0               1                                                  2
0  08988  20131126102946                                                 []
1  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
       0               1                                                  2
0  08988  20131126102946                                                 []
1  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
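The transposed result above still carries integer column labels and the per-file index. As a small sketch (the column names `zipcode`/`proc_time`/`triggers` are my own labels, not from the original), the frame can be cleaned up right after the concat:

```python
import pandas as pd

# Two illustrative per-chunk Series, as series_chunk would return them
s1 = pd.Series(['08988', '20131126102946', []])
s2 = pd.Series(['08989', '20131126102946', [53, 31]])

# Concatenate as columns, transpose to rows, then name the columns
df = pd.concat([s1, s2], axis=1).T.reset_index(drop=True)
df.columns = ['zipcode', 'proc_time', 'triggers']
```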

Andy Hayden · 21

Note: for those arriving at this question looking to parse json into pandas: if you do have valid json (this question doesn't), then you should use pandas' read_json function:

# can either pass string of the json, or a filepath to a file with valid json
In [99]: pd.read_json('[{"A": 1, "B": 2}, {"A": 3, "B": 4}]')
Out[99]:
   A  B
0  1  2
1  3  4

See the IO section of the docs for several examples, the arguments you can pass to this function, as well as ways to normalize less structured json.
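For nested JSON like the "current" sub-object in this question, the normalization mentioned above can be sketched with `pandas.json_normalize`, which flattens inner keys into dotted column names (illustrative records below, trimmed to two fields):

```python
import pandas as pd

records = [
    {"zipcode": "08989", "current": {"dewpt": 19, "feels_like": 34}},
    {"zipcode": "08990", "current": {"dewpt": 19, "feels_like": 34}},
]

# Inner dict keys become columns such as "current.dewpt"
flat = pd.json_normalize(records)
```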

If you don't have valid json, it's often efficient to manipulate the string before reading it in as json; for example see this answer.

If you have several json files you should concat the DataFrames together (similar to this answer):

pd.concat([pd.read_json(file) for file in ...], ignore_index=True)
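Filled out with glob, that one-liner might look like the sketch below (the temp files and their contents are purely illustrative; a real run would glob an existing directory of json files):

```python
import glob
import os
import tempfile

import pandas as pd

# Write two small valid-json files to a temp directory for illustration
tmpdir = tempfile.mkdtemp()
for i, payload in enumerate(['[{"A": 1, "B": 2}]', '[{"A": 3, "B": 4}]']):
    with open(os.path.join(tmpdir, 'part%d.json' % i), 'w') as fh:
        fh.write(payload)

# Read each file and stack the frames with a fresh index
paths = sorted(glob.glob(os.path.join(tmpdir, '*.json')))
df = pd.concat([pd.read_json(p) for p in paths], ignore_index=True)
```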

The original answer for this example:

Use a lookbehind in the regex for the separator passed to read_csv:

In [11]: df = pd.read_csv('foo.csv', sep='(?<!,)\s', header=None)

In [12]: df
Out[12]: 
       0               1                                                  2
0   8988  20131126102946                                                 []
1   8989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
2   8988  20131126102946                                                 []
3   8989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
4    544  20131126102946  [178, 30, 176, 103, 179, 112, 21, 20, 48, 7, 5...
5    601  20131126094911                                                 []
6    602  20131126101056                                                 []
7    603  20131126101056                                                 []
8    604  20131126101056                                                 []
9    544  20131126102946  [178, 30, 176, 103, 179, 112, 21, 20, 48, 7, 5...
10   601  20131126094911                                                 []
11   602  20131126101056                                                 []
12   603  20131126101056                                                 []
13   604  20131126101056                                                 []

[14 rows x 3 columns]

As mentioned in the comments, you can do this more directly by concatenating several Series together... it will also be easier to follow:

def series_chunk(chunk):
    jfile = json.loads(chunk)
    zipcode = jfile['zipcode']
    datetime = jfile['current']['proc_time']
    triggers = jfile['triggers']
    return pd.Series([jfile['zipcode'], jfile['current']['proc_time'], jfile['triggers']])

dfs = []
for fin in glob.glob('*.txt'):
    with open(fin) as f:
        df = pd.concat([series_chunk(chunk) for chunk in lines_per_n(f, 5)], axis=1)
        dfs.append(df)

df = pd.concat(dfs, ignore_index=True)

Note: you can also move the try/except into series_chunk.
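That note could be sketched as follows (Python 3 syntax, with an inline chunk list standing in for the file loop): series_chunk swallows bad chunks itself by returning None, and the caller filters those out before the concat:

```python
import json

import pandas as pd

def series_chunk(chunk):
    """Parse one chunk; return None if it is not valid JSON or lacks a field."""
    try:
        jfile = json.loads(chunk)
        return pd.Series([jfile['zipcode'],
                          jfile['current']['proc_time'],
                          jfile['triggers']])
    except (ValueError, KeyError):
        return None

chunks = [
    '{"zipcode": "08988", "current": {"proc_time": "20131126102946"}, "triggers": []}',
    'not json at all',  # silently skipped
]
series = [s for s in (series_chunk(c) for c in chunks) if s is not None]
df = pd.concat(series, axis=1).T
```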

Answered 2023-02-09 18:32