How can I use CUDA with Node.js?

ar_wen2402851455 · asked 2023-02-07 10:21

CUDA is an API provided by Nvidia that lets C/C++ code use the GPU for certain kinds of computation. I don't know exactly what kinds those are and would like to find out, but from what I've seen the gains are remarkable. Note that CUDA only works on Nvidia GPUs.

A module for Node.js does exist, but it only supports the 64-bit version of Windows, even though CUDA is available for the 32-bit version as well, so the only missing piece is a binding/extension from Node.js to CUDA in C++. There is also no sign of documentation for that module anywhere on GitHub or the wider internet, and its last commits were over a year and a half ago.

If it's at all possible, it would be great: Node.js would be able to use the GPU for its computations, taking web work and other applications to a whole new level. Also, given Node.js's concurrent nature, I think it would fit well with the GPU's parallel nature.

Suppose no such module exists right now. What are my choices?

It has already been done by someone else: http://www.cs.cmu.edu/afs/cs/academic/class/15418-s12/www/competition/r2jitu.com/418/final_report.pdf

user2129444.. 10

Here is a binding.gyp file that builds a node extension from the source files hello.cpp, goodby.cu, and goodby1.cu.

{
  # For Windows, be sure to run "node-gyp rebuild -msvs_version=2013"
  # from an MSVS shell.

  # For all targets.
  'conditions': [
    ['OS=="win"', {'variables': {'obj': 'obj'}},
     {'variables': {'obj': 'o'}}],
  ],

  'targets': [
    {
      'target_name': 'hello',
      'sources': ['hello.cpp', 'goodby.cu', 'goodby1.cu'],

      # Compile each .cu file with nvcc into an object file.
      'rules': [{
        'extension': 'cu',
        'inputs': ['<(RULE_INPUT_PATH)'],
        'outputs': ['<(INTERMEDIATE_DIR)/<(RULE_INPUT_ROOT).<(obj)'],
        'conditions': [
          ['OS=="win"',
           {'rule_name': 'cuda on windows',
            'message': 'compile cuda file on windows',
            'process_outputs_as_sources': 0,
            'action': ['nvcc', '-c', '<@(_inputs)', '-o', '<@(_outputs)'],
           },
           {'rule_name': 'cuda on linux',
            'message': 'compile cuda file on linux',
            'process_outputs_as_sources': 1,
            'action': ['nvcc', '-Xcompiler', '-fpic', '-c',
                       '<@(_inputs)', '-o', '<@(_outputs)'],
           }],
        ],
      }],

      'conditions': [
        ['OS=="mac"', {
          'libraries': ['-framework CUDA'],
          'include_dirs': ['/usr/local/include'],
          'library_dirs': ['/usr/local/lib'],
        }],
        ['OS=="linux"', {
          'libraries': ['-lcuda', '-lcudart'],
          'include_dirs': ['/usr/local/include'],
          'library_dirs': ['/usr/local/lib', '/usr/local/cuda/lib64'],
        }],
        ['OS=="win"', {
          'conditions': [
            ['target_arch=="x64"',
             {'variables': {'arch': 'x64'}},
             {'variables': {'arch': 'Win32'}}],
          ],
          'variables': {
            'cuda_root%': '$(CUDA_PATH)',
          },
          'libraries': [
            '-l<(cuda_root)/lib/<(arch)/cuda.lib',
            '-l<(cuda_root)/lib/<(arch)/cudart.lib',
          ],
          'include_dirs': ['<(cuda_root)/include'],
        }, {
          'include_dirs': ['/usr/local/cuda/include'],
        }],
      ],
    },
  ],
}
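The nvcc rule above compiles the .cu sources into object files that get linked into the addon. As a minimal sketch of what such a CUDA source might contain (the kernel and wrapper names here are hypothetical, not from the original answer):

```cpp
// goodby.cu — hypothetical CUDA source matching the binding.gyp above.
#include <cuda_runtime.h>

// Simple kernel: scale each element of an array on the GPU.
__global__ void scale(float* data, int n, float factor) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= factor;
}

// Plain C entry point so hello.cpp can call into the CUDA code
// without itself being compiled by nvcc.
extern "C" void scaleOnGpu(float* host, int n, float factor) {
  float* dev;
  cudaMalloc(&dev, n * sizeof(float));
  cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
  scale<<<(n + 255) / 256, 256>>>(dev, n, factor);
  cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
  cudaFree(dev);
}
```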


Julian de Bh.. 9

The most natural way to connect CUDA and Node.js would be through an "addon", which lets you expose C++ code to JavaScript programs running on node.

Node itself is a C++ application built on top of the v8 JavaScript engine, and addons are a way to write C++ libraries that JavaScript code can use just like node's own libraries.

From the outside, an addon just looks like a module. The C++ is compiled into a dynamic library and then exposed to node like any other module, e.g. my-addon.cc -> (compile) -> my-addon.dylib -> (node-gyp) -> my-addon.node -> var myFoo = require('my-addon').foo()

From inside the addon, you use the v8 and Node APIs to interact with the JavaScript environment, and access CUDA through its normal C++ APIs.

There are a lot of moving parts at this level. Something as simple as passing a value from one side to the other means worrying about both C++ memory management and the JavaScript garbage collector while you wrap/unwrap JavaScript values into the appropriate C++ types.

The good news is that most of the problems are individually well documented, and supporting libraries abound. For example, nan will get a skeleton addon up and running quickly, and on the CUDA side you are talking to their normal C++ interface, with truckloads of docs and tutorials.
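As a rough sketch of what such an addon source could look like (using nan; the exported Foo function is a hypothetical stand-in for code that would call into CUDA, and this assumes the nan package is installed and on the include path):

```cpp
// hello.cpp — minimal nan-based addon skeleton (a sketch, not a complete
// CUDA binding).
#include <nan.h>

// Unwrap a JS number into a C++ double, do some native work (in a real
// addon, this is where you would call into your CUDA code), and wrap the
// result back into a JS value.
NAN_METHOD(Foo) {
  double x = Nan::To<double>(info[0]).FromJust();
  info.GetReturnValue().Set(Nan::New(x * 2.0));
}

NAN_MODULE_INIT(Init) {
  NAN_EXPORT(target, Foo);
}

NODE_MODULE(hello, Init)
```

After building with node-gyp, JavaScript sees it as an ordinary module: require('./build/Release/hello').Foo(21).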

The right way would be to use the Nvidia CUDA toolkit to write your CUDA application in C++, and then invoke it as a separate process from node. That way you get the full power of CUDA and can use node's strengths to control that process.

For example, if you have a CUDA application and you want to scale it out to, say, 32 computers, you would write the application in fast C or C++, then use node to push it to all of the PCs in the cluster and handle communication with each remote process over the network. Node shines in this area. Once each CUDA application instance finishes its work, you join all the data with node and present it to the user.

Answered 2023-02-07 10:24