
Build kernel from source and boot to Ubuntu using L4T (Linux for Tegra) rootfs

By yahoo2016, Senior Member on 19th December 2015, 08:47 PM
Update (08/26/2017):

I updated my SATV Pro 2015 to ROM 5.x and did not have to replace the dtb file. The img files from the link below still seem to work.


The following have been tested for SATV ROM 3.x.

The latest boot images based on L4T 24.2 for external SD card, USB drive, internal eMMC and internal HDD can be downloaded from:
https://drive.google.com/file/d/0Bz5...ew?usp=sharing

Where,
"mmcblk1p1.img" is for booting to rootfs on external SD card.
"sda1.img" is for booting to rootfs on external USB drive (or SD card in USB adapter), or internal SATA HDD of modified 16GB SATV.
"mmcblk0p29.img" is for booting to rootfs on partition 29 (User Data) of internal eMMC of 16GB SATV if only Ubuntu is needed.
"mmcblk0p1.img" is for boot to rootfs on partition 1 of internal eMMC of SATV Pro.
"sda32.img" is for booting to rootfs on partition 32 (User Data) of HDD of 500GB SATV Pro if only Ubuntu is needed.
"sda33.img" is for booting to rootfs on partition 33 of HDD of 500GB SATV Pro for Ubuntu (modification of HDD partition table is needed).
"sda34.img" is for booting to rootfs on partition 34 of HDD of 500GB SATV Pro for Ubuntu (modification of HDD partition table is needed).


You will need to download the L4T 24.2 driver package and rootfs from Nvidia (https://developer.nvidia.com/embedded/linux-tegra), apply the binary drivers to the rootfs, and copy it to an external SD card/USB drive; see this post:

http://forum.xda-developers.com/show...&postcount=421

or internal eMMC or HDD (of SATV Pro):
http://forum.xda-developers.com/show...&postcount=422

The L4T 24.2 dtb from the rootfs "/boot" directory needs to be flashed. To find the dtb name for your SATV, type the following in recovery mode:
sudo fastboot oem dtbname

Flashing the wrong dtb will likely brick the SATV!

To get WiFi working, "/lib/firmware/brcm/fw_bcmdhd.bin" needs to be replaced with the one from the SATV (in /system/vendor/firmware/bcm4354).
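A minimal sketch of that firmware swap; the `ROOTFS` and `SYSTEM` mount points are assumptions, and stand-in files are created here so the copy can be demonstrated without a real device.

```shell
# Sketch: replace the generic brcm firmware in the L4T rootfs with the
# firmware shipped on the SATV. ROOTFS and SYSTEM are assumed mount points.
ROOTFS=rootfs
SYSTEM=satv_system
mkdir -p "$ROOTFS/lib/firmware/brcm" "$SYSTEM/vendor/firmware/bcm4354"
echo satv-firmware > "$SYSTEM/vendor/firmware/bcm4354/fw_bcmdhd.bin"  # stand-in file

# The actual swap: overwrite the L4T firmware with the SATV one.
cp "$SYSTEM/vendor/firmware/bcm4354/fw_bcmdhd.bin" \
   "$ROOTFS/lib/firmware/brcm/fw_bcmdhd.bin"
```

On the device itself, `$SYSTEM` would be the mounted Android /system partition and `$ROOTFS` the L4T root.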

To build the kernel from source, download the latest L4T kernel source and modify tegra21_defconfig by adding "CONFIG_ANDROID=y". An example configuration file can be downloaded from the following link:

https://forum.xda-developers.com/sho...&postcount=417
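The defconfig tweak can be scripted idempotently; a sketch, where the stand-in defconfig is created locally for illustration (in the real tree it lives under arch/arm64/configs/), and the build commands are commented since they need the kernel source and an aarch64 cross toolchain.

```shell
# Sketch: ensure CONFIG_ANDROID=y is present in tegra21_defconfig.
DEFCONFIG=tegra21_defconfig
printf 'CONFIG_ARM64=y\n' > "$DEFCONFIG"   # stand-in contents for illustration

# Append the option only if it is not already set (idempotent).
grep -q '^CONFIG_ANDROID=y' "$DEFCONFIG" || echo 'CONFIG_ANDROID=y' >> "$DEFCONFIG"

# Then, in the real kernel tree with a cross toolchain installed:
# make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra21_defconfig
# make ARCH=arm64 O=$TEGRA_KERNEL_OUT zImage dtbs modules
```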
19th December 2015, 10:01 PM |#2
kdb424, Senior Member, Pittsburgh
Thanks for this. I'll see if I can get something booting. Very much appreciated. Glad to see some real dev work going into these boxes lately. Not discrediting all the amazing devs so far, just noting that the interest is really ramping up!
20th December 2015, 10:44 AM |#3
crnkoj, Senior Member
Thanks for the post. I was reading up on this; we do still have an issue with the device trees for full Linux, sort of. I found a rather good explanation of dtbs here, https://events.linuxfoundation.org/s...ee-dummies.pdf, if anyone is interested. I was checking the cm12.1 kernel under arch/arm/boot/dts and saw there are many foster dts files used to build dtbs at kernel compile time. Do we need tegra124-loki-foster.dts or tegra124-loki-foster-p2530-0900-c00-00.dts for the 16 GB Shield? I know for the Pro there is tegra124-loki-fosterhdd-p2530-0900-c00-00.dts.

Btw, I have L4T with CUDA and your prebuilt kernel working and am running some tests with a specialized CUDA program. It appears it can do around 330 GFLOPS on the GPU (against a maximum of 512 GFLOPS, that already looks like good performance). Second, comparing single- and quad-core performance, there is a speedup of 3.3 going from 1 to 4 cores. Going from 32-bit L4T to the 64-bit Ubuntu from the other thread here, the 64-bit one is around 13% faster in both single- and quad-core computation (obviously CUDA speed can't be compared). On the 64-bit version it needs 3 minutes to compute something a 2600K needs 1 minute for, so for this use the 2600K appears to be (only) about 3 times faster.
20th December 2015, 11:21 AM |#4
yahoo2016, OP, Senior Member
Quote:
Originally Posted by crnkoj


I'm very interested in CUDA testing on the Shield TV; I posted on the Nvidia TX1 board:

https://devtalk.nvidia.com/default/t...nd-benchmarks/

I thought Nvidia advertised a Tegra X1 maximum of 1 TFLOPS.

How did you test the 64-bit version?

Which CUDA tests did you run?
20th December 2015, 11:41 AM |#5
crnkoj, Senior Member
Quote:
Originally Posted by yahoo2016


Firstly, on performance: Nvidia advertises 512 GFLOPS single precision (FP32) and 1 TFLOPS half precision (FP16); you can check here:
http://www.anandtech.com/show/8811/n...a-x1-preview/2
I think the 1 TFLOPS figure is just marketing.

Secondly, I think you misunderstood me; maybe my post was a bit confusing. I managed to test CUDA on 32-bit L4T, but not on 64-bit Ubuntu. The 330 GFLOPS was on 32-bit L4T using a program called CHARMM for molecular dynamics (actually a friend tested it through ssh); that was just a short test to see the peak.

We tested CPU performance with the same program on both 32-bit L4T and 64-bit Ubuntu; 64-bit Ubuntu was about 13% faster, both single-threaded and with 4 threads (comparing 1 CPU to 4 CPUs/the whole SoC). The speedup from 1 to 4 threads was 3.3, which is better than on a 2600K (3.2). Comparing the 2600K and the Shield at maximum performance (Shield with 4 threads; I think the 2600K was run with HT on, so 8 threads), the 2600K was faster by a factor of 3: 2600K = 1 minute, Shield = 3 minutes, CPU-only.

We are now testing with more demanding parameters so the runs take longer; that lets us compare the GPU/CUDA time on the Shield against x86 with a dedicated GPU (GTX 660 or GTX 960), since with short runs the early setup stage dominates and the GPU calculation times can't be compared otherwise.

That's why I asked about your way of building the kernel, so I could build my own and keep the system on a USB drive. It would also be interesting to boot with a proper Linux dtb rather than the Android one, but as far as I understood @Steel01, one should not change the dtb on the partition, as otherwise the bootloader fails to start and we have a brick. That's why the kexec method, or alternatively appending the dtb to the kernel, would be better; but then we'd need to shrink the kernel/initramfs so the dtb fits, again requiring another kernel/initramfs. After that I'd probably try a proper 64-bit Linux and perhaps get 32-bit CUDA working on it, or else we're stuck waiting for Nvidia to release 64-bit L4T/CUDA/driver binaries, because I don't believe 32-bit Nvidia drivers will work on a 64-bit Linux system (perhaps eventually as multilib, but that's even more complicated on ARM than on x86).
20th December 2015, 12:14 PM |#6
yahoo2016, OP, Senior Member
Quote:
Originally Posted by crnkoj


Thanks for the clarifications. I also noticed the dtb issue; the normal Linux kernel build involves the following steps:
Code:
make O=$TEGRA_KERNEL_OUT dtbs
make O=$TEGRA_KERNEL_OUT modules
make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=<your_destination>
I installed the outputs to my L4T rootfs, but the Android bootloader/kernel won't use them.

It's possible, but more dangerous, to repartition the internal eMMC to fit larger kernels. That was done for ChrUbuntu on the Chromebook CB5.
20th December 2015, 01:58 PM |#7
crnkoj, Senior Member
Quote:
Originally Posted by yahoo2016


To be perfectly honest, I'm not really thrilled about repartitioning the eMMC; I'd kind of like to use normal Android from the forums (zulu99 build) on it as well.
About the dtb: one can append it to the kernel, as outlined here: https://events.linuxfoundation.org/s...ee-dummies.pdf. I'm just not sure we can do that, since the dtb is already loaded by the bootloader. About the size of boot.img: we don't need all the modules in the initramfs, barely more than the working minimum (base board functions, USB, Ethernet); the rest can be modules in the rootfs, so the initramfs would shrink.
Btw, did you use the prebuilt initramfs or build it yourself? Lately I had some experience building an initramfs for an Atom 2-in-1 tablet on Arch Linux (it took me some time to figure out, but eventually I managed with the mkinitcpio script). In all honesty, I think the easiest would be to just install 64-bit Arch Linux on it and natively compile all the kernels/initramfs images on it.
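The "append dtb to kernel" idea from the dt-for-dummies slides is literally a concatenation: the flattened device tree blob is placed right after the kernel image, for bootloaders that support appended DTBs. A sketch with dummy files standing in for the real Image and dtb (the dtb file name is an assumption):

```shell
# Dummy stand-ins for the real kernel image and device tree blob.
printf 'KERNELDATA' > Image          # 10 bytes
printf 'DTBDATA'    > tegra210.dtb   # 7 bytes; name is an assumption

# Appended-DTB layout: kernel image immediately followed by the dtb.
cat Image tegra210.dtb > Image-dtb

wc -c < Image-dtb   # combined size = kernel size + dtb size
```

Whether the stock Shield bootloader honors an appended DTB is exactly the open question in this post; this only shows the mechanics.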

edit:
One more thing: when running the CUDA tests, I see that the GPU frequency changes dynamically depending on GPU usage; with higher usage the frequency goes down and vice versa. I guess there is some kind of TDP limit in the kernel/drivers/dtb, which might be interesting to fiddle with, as the thing certainly isn't thermally limited in this case.
For instance, using tegrastats:
Code:
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [0%,0%,0%,0%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [26%,24%,35%,25%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 91%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [30%,27%,43%,31%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [24%,27%,29%,29%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [28%,26%,41%,30%]@1530 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 0%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,29%,30%,35%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,24%,32%,26%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,26%,27%,32%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [23%,23%,39%,27%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [30%,28%,45%,28%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,24%,38%,25%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 4%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [32%,27%,35%,38%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,25%,31%,29%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [22%,22%,27%,33%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [26%,33%,26%,30%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [29%,32%,28%,38%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 0%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,26%,41%,26%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
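For anyone wanting to watch just the GPU numbers, the GR3D load/clock pair can be pulled out of tegrastats lines like the above with a little awk; a sketch against one sample line from the dump:

```shell
# One sample tegrastats line from the dump above.
LINE='RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [0%,0%,0%,0%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218'

# The field right after "GR3D" is the utilization@frequency pair.
GR3D=$(echo "$LINE" | awk '{for (i = 1; i < NF; i++) if ($i == "GR3D") print $(i + 1)}')
echo "GPU: $GR3D"

# Live on the device one would stream it:
# tegrastats | awk '{for (i = 1; i < NF; i++) if ($i == "GR3D") print $(i + 1)}'
```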
edit 2:
Oh, I just noticed the script here, https://devtalk.nvidia.com/default/t...nd-benchmarks/, where you posted; does this script work on our Shield as well?

edit 3:
OK, I used the script; now the CPU is at 2014 MHz constantly, EMC at 1600 MHz constantly and the GPU at 998 MHz constantly. The temperature did in fact increase rather a lot; I had to turn the fan up a bit, but 255 is too loud ^ ^
20th December 2015, 03:04 PM |#8
yahoo2016, OP, Senior Member
Quote:
Originally Posted by crnkoj

to be perfectly honest, im not really thrilled to repartition the emmc... would kind of like to use normal android from the forums (zulu99 build) on it as well.
about the dtb, one can append it to the kernel https://events.linuxfoundation.org/s...ee-dummies.pdf like its outlined here. im just not sure can we do that since the dtb is already loaded by bootloader? about the size of the boot. img than, we dont need all the modules in the initramfs. we barely need the working minimum (base board functions, usb, ethernet), the rest can be just modules in the rootfs, hence the size of the initramfs would shrink.
Btw did you use the prebuilt initramfs or did oyu do it yourself. Lately i had some experience doing the initramfs for an atom tablet 2 in 1 on arch linux (took me some time to figure it out, but eventually i managed to do it with the mkinitcpio script). in all honesty i think easiest would be to just install arch linux 64 bit on it and natively compile all the kernels/initrams on it.

edit:
one more thing, when running the cuda testing, i see that the GPU frequency dinamically changes depending on the gpu usage, with higher usage the frequency goes down and vice versa. I guess it has some kind of TDP limit either in the kernel/drivers/dtb, which might be interesting to fiddle, as the thing certainly isnt thermally limited in the case.
For instance, using tegrastats:

Code:
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [0%,0%,0%,0%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [26%,24%,35%,25%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 91%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [30%,27%,43%,31%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [24%,27%,29%,29%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [28%,26%,41%,30%]@1530 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 0%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,29%,30%,35%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,24%,32%,26%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,26%,27%,32%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [23%,23%,39%,27%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1336/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [30%,28%,45%,28%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,24%,38%,25%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 4%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [32%,27%,35%,38%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [25%,25%,31%,29%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [22%,22%,27%,33%]@825 EMC 23%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [26%,33%,26%,30%]@921 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [29%,32%,28%,38%]@2014 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 0%@998 EDP limit 2218
RAM 1335/2807MB (lfb 63x4MB) SWAP 0/0MB (cached 0MB) cpu [27%,26%,41%,26%]@825 EMC 24%@1600 AVP 0%@80 VDE 0 GR3D 99%@921 EDP limit 2218
edit 2:
oh i just noticed the script here https://devtalk.nvidia.com/default/t...nd-benchmarks/ where you posted, does this script work on our shield as well?

edit3:
k used the script, now cpu is at 2014mhz constantly, emc 1600 mhz constantly and gpu at 998 mhz constantly, the temp did in fact increase rather much lol, had to set the fan to a bit more, but 255 is too loud ^ ^

Ideally, I'd like the kernel and rootfs on an external SD card (the way I'm running Ubuntu on my Chromebook CB5). It seems the only way would be to get MultiROM or U-Boot working.

I made changes to the makefile from Link 1 such that the ramdisk is only 480 KB instead of 2 MB.

The "maxPef.sh" script to set CPU/GPU clocks is critical for maximizing performance.
20th December 2015, 03:33 PM |#9
crnkoj, Senior Member
Quote:
Originally Posted by yahoo2016

Ideally, I'd like kernel and rootfs on external SD (the way I'm running ubuntu on my Chromebook CB5). It seems the only way would be to have multiroom or u-boot working.

I made changes to makefile of Link 1 such that the ramdisk is only 480 KB instead of 2 MB.

The "maxPef.sh" scrip to set CPU/GPU clocks is critical to maximize performance.

I think you can forget U-Boot. Perhaps MultiROM, especially as the new Pixel C tablet uses the same chipset and I think more people will use and develop for it. Yeah, all the tests until now were with the stock governor and speeds. I'll have to retest everything now...
20th December 2015, 06:31 PM |#10
yahoo2016, OP, Senior Member
Quote:
Originally Posted by crnkoj


I found someone who even installed U-Boot on a Galaxy:

http://forum.xda-developers.com/gala...iboot-t1680898


I like my Tegra K1 based Chromebook CB5; it's a true dual boot (Ctrl+U for external Ubuntu and Ctrl+D for stock ChromeOS). I won't even consider an overpriced Pixel C, which has no SD slot, USB 3 port or HDMI port. The Shield TV is much better than the Pixel C for hacking. I'll look into MultiROM for the Shield; I got the impression it failed to build for arm64.
20th December 2015, 06:33 PM |#11
crnkoj, Senior Member
Quote:
Originally Posted by yahoo2016


Uhm, I meant that now that the Pixel C is out, more people will develop for it and we can use their work/cooperate.