You can now use the stereo_image_proc/point_cloud2 nodelet to reconstruct the 3D point cloud. I tested the nodelet with the new /multisense/left/disparity_image topic and confirmed that the resulting point cloud matched /multisense/image_points2. It is worth noting that, because the MultiSense publishes topics on an on-demand basis, you must subscribe to /multisense/right/image_rect for the /multisense/right/camera_info topic to be published. Additionally, the current stereo_image_proc does not support point cloud reconstruction from cameras with non-square pixels. I created a pull request to solve this issue (https://github.com/ros-perception/vision_opencv/pull/52), but it has not been accepted yet. Until it is, point clouds generated by stereo_image_proc at resolutions of 1024x272, 2048x544, 1024x512, and 2048x1024 will be incorrect.
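A launch file for loading the nodelet might look roughly like the sketch below. This is only an outline: the manager and nodelet names are illustrative, and the remaps assume the MultiSense topic layout described above, so you will likely need to adjust them for your setup.

```xml
<launch>
  <!-- Sketch only: names are illustrative; remaps assume the MultiSense
       driver's topic layout and may need adjusting. -->
  <node pkg="nodelet" type="nodelet" name="multisense_manager" args="manager"/>
  <node pkg="nodelet" type="nodelet" name="point_cloud2"
        args="load stereo_image_proc/point_cloud2 multisense_manager">
    <!-- stereo_image_proc expects these topics under its own namespace -->
    <remap from="left/image_rect_color" to="/multisense/left/image_rect_color"/>
    <remap from="left/camera_info"      to="/multisense/left/camera_info"/>
    <remap from="right/camera_info"     to="/multisense/right/camera_info"/>
    <remap from="disparity"             to="/multisense/left/disparity_image"/>
  </node>
</launch>
```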
As I previously mentioned, you will need to create an active subscription to /multisense/right/image_rect so that the /multisense/right/camera_info topic will be published. Once that subscription is active, you should see a PointCloud2 message being published on the /points2 topic.
With regard to size, stereo_image_proc will beat /multisense/organized_image_points2_color in terms of the amount of data transmitted via ROS topics, which you can confirm with a quick back-of-the-napkin computation. For stereo_image_proc, with the sensor running in 0.5 MP mode, you need one RGB image at ~0.5M * 3 bytes and one floating-point sensor_msgs/DisparityImage at ~0.5M * 4 bytes, for roughly 3.5 MB transmitted per frame by the MultiSense ROS driver. For the organized point cloud, the total size per cloud is ~0.5M * 16 bytes: 4 bytes each for x, y, and z, plus 4 bytes for RGB, or roughly 8 MB per frame. It is also important to note that the sensor_msgs/DisparityImage message is a depth-image representation which ROS is trying to move away from (http://www.ros.org/reps/rep-0118.html).

If the amount of data transmitted via ROS topics is your main concern, it would be best to subscribe to the /multisense/left/disparity topic and reproject it to a point cloud in your own node. This would only use ~0.5M * 2 bytes per image and would avoid having to use nodelets. You can also skip the colorization step, if you want, to keep the transmitted data as small as possible.
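As a sketch of what that reprojection would look like in your own node, assuming the standard pinhole stereo model (the function and parameter names here are illustrative; in practice fx, cx, and cy come from the left camera_info, and the baseline can be derived from the right camera's projection matrix):

```python
import numpy as np

def reproject_disparity(disparity, fx, cx, cy, baseline):
    """Reproject a disparity image (in pixels) to an Nx3 array of 3D points.

    Pinhole stereo model: Z = fx * baseline / d,
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fx.
    """
    # Pixel coordinate grids: v indexes rows, u indexes columns.
    v, u = np.indices(disparity.shape)
    # Zero (or negative) disparity means no stereo match; skip those pixels.
    valid = disparity > 0
    d = disparity[valid].astype(np.float64)
    z = fx * baseline / d
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fx
    return np.column_stack((x, y, z))
```

Packing the result into a sensor_msgs/PointCloud2 for downstream consumers is then straightforward, and you only pay the reprojection cost for frames you actually use.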
I was able to reproduce your errors. Looking at the nodelet manager output, it appears the nodelet is doing some internal remapping of the camera_info topic. Specifically, it appears to be internally remapping camera_info to /multisense/camera_info. Changing the nodelet launch command to the following appears to solve the problem.