Swipe Launches KONNECT Power at Rs. 4,999 Exclusively on Snapdeal

by Shrutee K/DNS 

India, 4th August 2017: Swipe Technologies, India's leading mobile internet technology company, is all set to unveil a brand new member of its popular KONNECT series, the Swipe KONNECT Power. The slim and sturdy smartphone sports a 5" HD IPS display and an enormous 3000 mAh battery. The feature-rich Swipe KONNECT Power is competitively priced at just Rs. 4,999 and will be available exclusively on Snapdeal from 7th August 2017.

Swipe KONNECT Power is driven by a powerful 1.5GHz quad-core processor running Android 6.0 for a lag-free multitasking experience. It comes with 2 GB of RAM and 16 GB of internal memory, expandable up to 32 GB, while OTG support further extends the storage options. The new 4G VoLTE-ready KONNECT Power features a 5" HD IPS display that lets users enjoy cleaner, sharper visuals at the highest resolution. The device also has a very sleek body that gives the phone a stylish look. That's not all: for photography enthusiasts, the KONNECT Power comes with an 8MP rear camera and a 5MP front camera, and the camera app includes plenty of features to enhance photography. The phone keeps users going longer with its 3000 mAh battery, chosen with the daily lifestyle and usage patterns of young smartphone users in mind.

Commenting on the launch of KONNECT Power, Mr. Shripal Gandhi, Founder and CEO of Swipe Technologies, said: "Fulfilling aspirations of demanding Indians at affordable price is what differentiates Swipe from other smartphone players. Our latest creation is yet another affordable device under the KONNECT series – KONNECT Power. It is designed for the aspiring youth of the country who requires long lasting battery that too with 2GB RAM yet the phone is light on pocket."
Vishal Chadha, Sr. Vice President – Business, said: "As part of our commitment to the vision of Digital India, we are keen to bring our consumers access to compelling technology products at great value. With Swipe Konnect Power, we are confident that this newest exclusive addition to our smartphone assortment will be well received by our customers."
Swipe KONNECT Power comes with a gorgeous design, and it has a smoother touch experience as well as comfortable grip. Customers will be able to enjoy a greater multimedia experience, as they can capture stunning photos and videos in virtually any lighting condition.
About Swipe Technologies:
Swipe Technologies is an innovation-centric mobility solutions company that started operations in July 2012. Within a short span of time, Swipe has become a leading tablet and smartphone maker in India. Swipe was started with the aim of bringing exciting devices to the growing Indian market, and today it is a leading consumer brand with key innovations across products, pricing and customer support. Founded by technocrat Mr. Shripal Gandhi, Swipe raised $5 million in May 2014 from the venture capital firm Kalaari Capital. In the last three years, Swipe has won a number of accolades, including "Top 50 Inspiring Entrepreneurs of India" by The Economic Times, "Most Innovation-driven Company in India" by the World Brand Congress, "Most Innovative Start-up" by Franchise India, "Best Integrated Campaign" by the World Brand Congress and "Best Youth Brand Tablet" by CNBC. Swipe's young founder Mr. Shripal Gandhi has also been recognized as a "CNBC Young Turk" for his breakthrough strategies and innovation in the field of mobile communication technologies. For details, log on to www.justswipe.com

          Brands of food processors        
Now, let's talk about the brands a bit, and why it's so important to buy the best food processor on the market for your needs. Food processors don't come cheap; that's a fact. Good ones, at least. And if you get one for pocket change, it probably means you won't get far at processing actual food with it. They are also very intricate pieces of machinery, so it's not easy to make them cheaply. It's rather impossible, I would say. I'm sure I don't have to tell you that buying a famous-brand food processor is the best...
          How to choose a food processor        
If we were to pick a single category of items in everyone's household as the most important one, I would certainly opt for the one handling the food we eat. I'm inclined to believe you would agree with me. That's why I decided to make this ultimate guide, to help you pick the best food processor to fit your every need. There is a lot of research you could do, of course. You could spend hour after hour reading all kinds of different food processor reviews, browsing the web for answers, and trying to connect the dots yourself. Or you...
          Quality aspects of a food processor         
As with pretty much every product out there, the best food processor is a combination of certain factors. I will point them out here for you, so you know what to look for once you decide to hit the stores, or just order one online. So, let's begin our list of critical factors for a good-quality food processor. The capabilities of your new food processor will tell you a lot about its quality. I'll list for you a bare minimum of operations your appliance should be able to do, and if you're looking at one that can't, well then,...
          Design of the blade in your food processor        
This plays a huge role in the overall performance of your food processor. It is technically possible to sharpen the blade, but it is a very difficult thing to do, and I personally couldn't name a single person I know (myself included) who does it regularly. A high-quality blade for your processor will feature micro serrations, which retain their ability to cut for a much longer period of time than a razor-like smooth edge. It's a problem even with some of the most expensive and downright best food processors in existence. They have overpowered motors, which would obliterate...
          LG Introduces The Curved Phone G Flex 2        

Back in 2014, LG released the G Flex handset, a 6-inch smartphone with a unique curved body design and a self-healing coating that clears up scratches or any damage done to the device's casing.  

Now, the South Korean company is introducing an enhanced and more streamlined version, the G Flex 2. This time around, the newest version comes with a smaller but handier 5.5-inch display and Qualcomm's most powerful chipset.

The G Flex 2 smartphone was introduced by LG during its very recent press conference at the Consumer Electronics Show 2015 (CES 2015) held in Las Vegas, Nevada in the United States. 

With its introduction, LG's G Flex 2 handset is the first of many phones this year expected to feature Qualcomm's newest offering -- the Snapdragon 810 processor. It is an eight-core, 2.0 GHz, 64-bit chipset ready for Android 5.0 Lollipop, the latest version of the Android mobile operating system. The 810 works with 2 gigabytes of memory and can support 3 x 20 MHz LTE carrier aggregation on all networks that support it.

For the device's display, LG employed its own plastic OLED display technology. Techies may recall that this is the same display technology used in LG's G Watch R wearable. In the G Flex 2's case, though, the resolution is set higher, at 1080p.

As for its camera, the G Flex 2 sports a 13-megapixel camera with a laser autofocus system and optical image stabilization -- features that users may also find in LG's current flagship device (the LG G3).

The G Flex 2 features a 3,000 mAh battery that takes full advantage of Qualcomm's fast-charging technology available via the latest Snapdragon chipsets. This technology significantly reduces recharging time by up to 75 percent, so users can get the G Flex 2 half-charged in just 40 minutes.

But what really makes the G Flex 2 unique is its curved body. The front side (display screen) has a 700 millimeter curved radius, while the rear side is curved just a bit less (650 millimeter radius). This is by design, by the way. According to LG, this helps users hold the phone more easily and also, allows the device to fit more easily in users' pockets.

Naturally, the G Flex 2 will be launched first in South Korea before the end of the month. But American users will not have long to wait. Major carriers AT&T and Sprint have already confirmed that they will be including the device in their respective smartphone line-ups this year. Regional carrier US Cellular has also announced that they will be offering the device in spring. No specific release dates and pricing details have been provided yet, but LG fans in the US should be pretty excited nonetheless. 

Want to know more LG devices? You can start comparing LG phones and plans now.


          Nexus 6 Pre-Orders Start November 12th At AT&T; Goes On Sale November 14th At Sprint        
Nexus 6 pre-orders at AT&T and Sprint

Good news for Nexus 6 fans. Google's latest Nexus phablet will start to become available for purchase at two major wireless carriers this week. 

AT&T has confirmed that the Nexus 6 will become available for pre-order starting on Wednesday, November 12th. The device is priced at $249 on-contract. Buyers, however, can opt to purchase the phablet at its full retail price of $682.99 and pay off that amount over the course of 20, 24, or 30 monthly payments via AT&T's Next 12, Next 18, and Next 24 plans, respectively. Unfortunately, AT&T has not yet provided any shipping estimates for the Nexus 6 as of this writing, but it should not be long before the carrier makes an announcement regarding availability.

Two days later, on Friday, November 14th, the Nexus 6 will become available in Sprint's stores. As announced by Sprint, the phablet will be priced at $696 off-contract, with $0 down and 24 monthly payments of $29 through the carrier's Easy Pay plan.

Also, it should be noted that Sprint will only be offering the Midnight Blue 32 gigabyte edition of the Nexus 6 smartphone.

As one might notice, the device does have a rather high price tag. That may not be too surprising. The Nexus 6 smartphone after all sports an enormous 6-inch 2560 by 1440 resolution display screen, and a 13-megapixel rear camera. 

Hardware-wise, it packs a Snapdragon 805 processor, 32 gigabytes (or 64 gigabytes) of internal storage, 3 gigabytes of random access memory (RAM), and a powerful 3,220 mAh battery.

It is also the first smartphone to have Android 5.0 Lollipop, the newest version of the Android mobile operating system. This updated edition of the world's most popular smartphone OS is truly a joy to behold, featuring a neater design further enhanced with animations and a more user-friendly interface and environment.

In the last two weeks, the Nexus 6 has only been made available through the Google Play Store and directly from Motorola, the phone maker who collaborated with Google in bringing the phablet to life.

In a few days, users will be able to get the phablet from two of the four major wireless carriers in the United States. While AT&T and Sprint have already joined the party, many are expecting the other big boys (T-Mobile and Verizon Wireless) to follow suit.

Interested to learn more about AT&T and Sprint offers? Browse AT&T phones and plans here, or explore Sprint deals here.


          Droid X360 goes for the KIRF prize, antagonizes Microsoft, Motorola and Sony at the same time (video)        

Droid X360 PS Vita clone goes for the KIRF prize, antagonizes Microsoft, Motorola and Sony at the same time

Can we establish a KIRF award for Most Likely to Invite Multiple Lawsuits? If so, Long Xun Software would have to claim the statuette for its Droid X360, at least if it dared set foot in the US. This prime example of keepin' it real fake is even more of a PS Vita clone than the Yinlips YDPG18, but goes the extra mile with a name that's likely to irk Microsoft, Motorola, Verizon and George Lucas all at once. That's even discounting the preloaded emulators for just about every pre-1999 Nintendo, Sega and Sony console. Inside, you'll at least find a device that's reasonably up to snuff: the 5-inch handheld is running Android 4.0 on a 1.5GHz single-core Quanzhi A10 processor, 512MB of RAM, 8GB of built-in space, a 2-megapixel camera at the back and a VGA shooter at the front. If the almost gleeful amount of copyright and trademark violation isn't keeping you from wanting this award-winner, you'll have to ask Long Xun for pricing and availability.


          Simple Ayam Masak Cili Berlada (Chicken in Chilli Paste)        
Hi everyone....


I feel like taking a short break from the engagement posts. Instead, here's a new recipe that has become my fiancé's favourite. So far I've only ever cooked sambal ayam for him, because he really loves sambal dishes, especially sambal with fried egg. There was one week when we had sambal telur every single day, until even I got sick of it.


It turns out he likes this cili berlada even more. If there are 3 pieces of chicken, he'll finish them all by himself in one sitting. And he eats even more rice now (we're about to get engaged and married, and instead of dieting his appetite just keeps growing).. hehe.. OK, let's look at the ingredients. This time I was a bit more diligent, so I took plenty of photos.. hehehe


Ingredients (enough for about 2 people)

  • Chicken rubbed with salt and turmeric (I used 3 pieces for this recipe)
  • 1 large onion
  • 3 cloves of garlic
  • 4 red chillies
  • 1 cm of ginger
  • Tamarind juice
  • 1 small ladle of ground dried chilli
  • Salt and sugar
  • Cooking oil

Method

1.  Fry the chicken first.



2.  Meanwhile, cut the ginger, onion, garlic and red chillies.
**OPTIONAL: if you like your food spicy, you can add bird's eye chillies.


3.  If you have a food processor, just put all the sliced ingredients into it. If not, you can give them a coarse blitz in a blender. When blending, add a little water, but not too much.



There... blend it until it looks like this..

4.  Don't forget to drain the chicken once it's cooked; don't let it burn. Hahaha.. OK, use the oil from frying the chicken, just 5-6 tablespoons of it, to sauté the blended ingredients. While sautéing, add the ground chilli / chilli paste as well. Sauté until it is cooked and the oil separates.


how it looks when it first goes into the wok


this is how it looks when it's nearly done

5.  Add the tamarind juice, salt and a little sugar. Taste and adjust until you get the flavour you want, then add the drained fried chicken. Give it a quick stir, and it's done!



And this is how it turns out... my fiancé says I don't even need to cook his favourite sambal anymore, just this one will do.... hahaha. Have a go at it! Simple, isn't it? ;-)




          Simple Kuah Kacang (Peanut Sauce) Recipe for Nasi Impit / Satay        
Happy Hari Raya, everyone!!! Hahaha.. Raya is tomorrow, isn't it? So how are the preparations going? Eh, I'd rather not talk about it.. I'm stressed because I can't go back home for Raya. Hahaha! Yesterday after getting home from work I went straight into the kitchen. I meant to cook something for breaking fast, but my eyes landed on the lovely peanuts sitting on the rack, and suddenly I was craving nasi impit with kuah kacang. This is actually yesterday's post, written from the office; I forgot to publish it.. hahaha


And here's the result..

what a mess I made eating.. hahahaha.. you can tell how hungry I was


OK, let me share the recipe.. it's easy!


Ingredients (it's just me and my housemates eating, 2-3 people, so the quantities aren't large)

-- 300 g peanuts
-- 1 ladle of ground chilli
-- 2 large onions
-- 2 cloves of garlic
-- 1 inch of ginger
-- 1 stalk of lemongrass (bruised)
-- A little belacan (shrimp paste)
-- Half an inch of galangal
-- 1 block of gula melaka (palm sugar; northerners call it gula gerek.. hahaha)
-- Tamarind juice
-- Water
-- Granulated sugar
-- Salt
-- Oil for sautéing


Method

1.  Dry-roast the peanuts.

roast them like this.. usually people do it without any oil, but I added about 1 spoon of oil, just a tiny bit, so they cook faster. In the photo there's a bit too much oil though.. oh well! hahaha~


2.  Once the peanuts are done, separate the skins from the kernels. Then pound them if you're feeling diligent, or just blend them! Some people, like me, don't like using a blender because the peanuts end up too fine. So I used the food processor the office gave me the other day; that way the peanuts come out neither too fine nor too coarse. This is what the food processor looks like:


3.  Blend the onions, garlic, galangal, ginger and belacan until fine.

4.  Heat the oil and sauté the blended ingredients with the lemongrass until fragrant. Then add the ground chilli. Sauté until the oil separates.

5.  Add the peanuts, tamarind juice and water to form the gravy. Pour slowly; don't let it get too thin. Then season with gula melaka (I grate the gula melaka; a carrot/cucumber grater works too and is faster), granulated sugar and salt. The amount of gula melaka is one block, like in this photo:


6.  Cook over low heat (if you have the patience), because they say kuah kacang that is cooked a little longer keeps longer. Once the flavour is right, go cut up some nasi impit and eat it with this kuah kacang. It's delicious with satay too!


Done! :-)


          AMD Ryzen Downcore Control        
AMD Ryzen 7 processors come with a nice feature: downcore control. This feature lets you enable / disable cores. Ryzen 7 and Ryzen 5 chips use the same die, which is made up of two CCX (CPU...

          Home computer repair - Home computer repair Tampa troubleshooting        
The basic parts of a home computer are the monitor, motherboard, SMPS, processor, daughter board, network cards, RAM, CMOS battery, hard disk, buses (cables), keyboard, mouse, UPS and modem. Being electronic components, any of these parts can become damaged.


A long beep heard while starting the PC, with the OS failing to load, means your RAM is damaged. Check the slot type on your motherboard, buy a matching module (DRAM or SDRAM) and install the new RAM.


Your operating system (OS) may be damaged due to an improper shutdown or missing files; in that case, reinstall the OS. Another possible cause is a damaged hard disk.


A loud noise coming from the CPU case is usually the cabinet cooling fan. Replace the failing fan with a good one.


Sometimes there is a problem with the PC booting. Boot troubleshooting can be done by pressing the Delete key while restarting the PC and providing the appropriate settings in the BIOS.


If you are like most people who often use a computer at work or at home, you know how frustrating it can be when your computer crashes in the middle of an important task. Errors such as crashes and file corruption are often a result of registry problems. To make sure your PC always performs efficiently, you may want to invest in registry repair software.


Do you use a free Windows registry repair tool on your laptop or desktop? There are a few guidelines to keep in mind when choosing such software:


It is important to note, however, that software that can find many errors isn't necessarily the best kind. Some dubious repair programs list errors where there aren't actually any, so choose well.


Unless you are a certified computer repair genius, it is best to choose user-friendly software. Free Windows registry repair programs are useful for everyone.



 



Some home computers actually come with a built-in registry repair program, and while you may be tempted to fix the registry yourself, remember that you might inadvertently delete information that is critical to your computer's operation.


About the author: Tampa PC Pros Inc takes pride in offering a wide range of computing solutions, including: PC Repair Tampa, Computer Repair Tampa, Custom PC Repair Tampa, Computer Repair Service Tampa and Virus Removal Service Tampa.

computer security help: data backup


Article Source: www.articlesnatch.com


          How Plastic We've Become        

Our bodies carry residues of kitchen plastics

Food for Thought

In the 1967 film classic The Graduate, a businessman corners Benjamin Braddock at a cocktail party and gives him a bit of career advice. "Just one word…plastics."

Although Benjamin didn't heed that recommendation, plenty of other young graduates did. Today, the planet is awash in products spawned by the plastics industry. Residues of plastics have become ubiquitous in the environment—and in our bodies.

A federal government study now reports that bisphenol A (BPA)—the building block of one of the most widely used plastics—laces the bodies of the vast majority of U.S. residents young and old.

Manufacturers link BPA molecules into long chains, called polymers, to make polycarbonate plastics. All of those clear, brittle plastics used in baby bottles, food ware, and small kitchen appliances (like food-processor bowls) are made from polycarbonates. BPA-based resins also line the interiors of most food, beer, and soft-drink cans. With use and heating, polycarbonates can break down, leaching BPA into the materials they contact. Such as foods.

And that could be bad if what happens in laboratory animals also happens in people, because studies in rodents show that BPA can trigger a host of harmful changes, from reproductive havoc to impaired blood-sugar control and obesity (SN: 9/29/07, p. 202).

For the new study, scientists analyzed urine from some 2,500 people who had been recruited between 2003 and 2004 for the National Health and Nutrition Examination Survey (NHANES). Roughly 92 percent of the individuals hosted measurable amounts of BPA, according to a report in the January Environmental Health Perspectives. It's the first study to measure the pollutant in a representative cross-section of the U.S. population.

Typically, only small traces of BPA turned up, concentrations of a few parts per billion in urine, note chemist Antonia M. Calafat and her colleagues at the Centers for Disease Control and Prevention. However, with hormone-mimicking agents like BPA, even tiny exposures can have notable impacts.

Overall, concentrations measured by Calafat's team were substantially higher than those that have triggered disease, birth defects, and more in exposed animals, notes Frederick S. vom Saal, a University of Missouri-Columbia biologist who has been probing the toxicology of BPA for more than 15 years.

The BPA industry describes things differently. Although Calafat's team reported urine concentrations of BPA, in fact they assayed a breakdown product—the compound by which BPA is excreted, notes Steven G. Hentges of the American Chemistry Council's Polycarbonate/BPA Global Group. As such, he argues, "this does not mean that BPA itself is present in the body or in urine."

On the other hand, few people have direct exposure to the breakdown product.

Hentges' group estimates that the daily BPA intake needed to create urine concentrations reported by the CDC scientists should be in the neighborhood of 50 nanograms per kilogram of bodyweight—or one millionth of an amount at which "no adverse effects" were measured in multi-generation animal studies. In other words, Hentges says, this suggests "a very large margin of safety."
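As a rough back-of-the-envelope illustration of how such intake estimates can be derived (my own numbers and assumptions, not figures from either party): take a urinary concentration of about 2.5 micrograms per liter, a typical urine output of roughly 2 liters per day, a 70-kilogram adult, and assume essentially all ingested BPA ends up in urine. Then

daily intake ≈ (2.5 µg/L × 2 L/day) / 70 kg ≈ 0.07 µg/kg per day ≈ 70 ng per kilogram of bodyweight per day,

which is the same order of magnitude as the industry's 50-nanogram figure. The dispute that follows is essentially over whether such a simple mass balance reflects how BPA is actually absorbed, broken down and excreted.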

No way, counters vom Saal. If one applies the ratio of BPA intake to excreted values in hosts of published animal studies, concentrations just reported by CDC suggest that the daily intake of most Americans is actually closer to 100 micrograms (µg) per kilogram bodyweight, he says—or some 1,000-fold higher than the industry figure.

Clearly, there are big differences of opinion and interpretation. And a lot may rest on who's right.

Globally, chemical manufacturers produce an estimated 2.8 million tons of BPA each year. The material goes into a broad range of products, many used in and around the home. BPA also serves as the basis of dental sealants, which are resins applied to the teeth of children to protect their pearly whites from cavities (SN: 4/6/96, p. 214). The industry, therefore, has a strong economic interest in seeing that the market for BPA-based products doesn't become eroded by public concerns over the chemical.

And that could happen. About 2 years after a Japanese research team showed that BPA leached out of baby bottles and plastic food ware (see What's Coming Out of Baby's Bottle?), manufacturers of those consumer products voluntarily found BPA substitutes for use in food cans. Some 2 years after that, a different group of Japanese scientists measured concentrations of BPA residues in the urine of college students. About half of the samples came from before the switch, the rest from after the period when BPA was removed from food cans.

By comparing urine values from the two time periods, the researchers showed that BPA residues were much lower—down by at least 50 percent—after Japanese manufacturers had eliminated BPA from the lining of food cans.

Concludes vom Saal, in light of the new CDC data and a growing body of animal data implicating even low-dose BPA exposures with the potential to cause harm, "the most logical thing" for the United States to do would be to follow in Japan's footsteps and "get this stuff [BPA] out of our food."

Kids appear most exposed

Overall, men tend to have statistically lower concentrations of BPA than women, the NHANES data indicate. But the big difference, Calafat says, traces to age. "Children had higher concentrations than adolescents, and they in turn had higher levels than adults," she told Science News Online.

This decreasing body burden with older age "is something we have seen with some other nonpersistent chemicals," Calafat notes—such as phthalates, another class of plasticizers.

The spread between the average BPA concentration that her team measured in children 6 to 11 years old (4.5 µg/liter) and adults (2.5 µg/L) doesn't look like much, but proved reliably different.

The open question is why adults tended to excrete only 55 percent as much BPA. It could mean children have higher exposures, she posits, or perhaps that they break it down less efficiently. "We really need to do more research to be able to answer that question."

Among other differences that emerged in the NHANES analysis: urine residues of BPA decreased with increasing household income and varied somewhat with ethnicity (with Mexican-Americans having the lowest average values, blacks the highest, and whites' values in between).

There was also a time-of-day difference, with urine values for any given group tending to be highest in the evening, lowest in the afternoon, and midway between those in the morning. Since BPA's half-life in the body is only about 6 hours, that temporal variation in the chemical's excretion would be consistent with food as a major source of exposure, the CDC scientists note.

In the current NHANES paper, BPA samples were collected only once from each recruit. However, in a paper due to come out in the February Environmental Health Perspectives, Calafat and colleagues from several other institutions looked at how BPA excretion varied over a 2-year span among 82 individuals—men and women—seen at a fertility clinic in Boston.

In contrast to the NHANES data, the upcoming report shows that men tended to have somewhat higher BPA concentrations than women. Then again both groups had only about one-quarter the concentration typical of Americans.

The big difference in the Boston group emerged among the 10 women who ultimately became pregnant: their BPA excretion increased 33 percent during pregnancy. Owing to the small number of participants in this subset of the study population, the pregnancy-associated change was not statistically significant. However, the researchers report, these are the first data to look for changes during pregnancy, and ultimately determining whether some feature of pregnancy, such as a change in diet or in how BPA is metabolized, really alters body concentrations of the pollutant could be important: it could point to whether the fetus faces an unexpectedly high exposure to the pollutant.

If it does, the fetus could face a double whammy: Not only would exposures be higher during this period of organ and neural development, but rates of detoxification also would be diminished, vom Saal says.

Indeed, in a separate study, one due to be published soon in Reproductive Toxicology, his team administered BPA by ingestion or by injection to 3-day-old mice. Either way, the BPA exposure resulted in comparable BPA concentrations in blood.

What's more, that study found, per unit of BPA delivered, blood values in the newborns were "markedly higher" than other studies have reported for adult rodents exposed to the chemical. And that makes sense, vom Saal says, because the enzyme needed to break BPA down and lead to its excretion is only a tenth as active in babies as in adults. That's true in the mouse, he says, in the rat—and, according to some preliminary data, in humans.

Vom Saal contends that since studies have shown BPA exhibits potent hormonelike activity in human cells at the parts-per-trillion level, and since the new CDC study finds that most people are continually exposed to concentrations well above the parts-per-trillion ballpark, it's time to reevaluate whether it makes sense to use BPA-based products in and around foods.


If you would like to comment on this Food for Thought, please see the blog version.


          RE: Apple just released their new budget PC        
Actually, does this mean they've released the edu model to the general public? It certainly looks like it. It actually looks usable, too: a 160GB instead of an 80GB drive, plus the new processor.
          RE: Drop in        
Apple just did it in the iMac upgrade by the look of things. The 17" and 20" are identical but for the processor, which suggests they probably just dropped a different processor into the socket.
          AMD Ryzen 5 1500X 3.5GHz 16MB L3 Box processor        

Regular price: € 209,95

Sale price: € 191,95


          AMD Ryzen 5 1600 3.2GHz 16MB L3 Box processor        

Regular price: € 239,95

Sale price: € 218,95


          AMD Ryzen 5 1400 3.2GHz 8MB L3 Box processor        

Regular price: € 189,95

Sale price: € 168,95


          Intel Core i7-7820X 3.6GHz 11MB L3 Box processor        

Regular price: € 649,00

Sale price: € 619,00


          be quiet! Silent Loop processor water & freon cooler        

Regular price: € 149,95

Sale price: € 138,95


          Intel Core i5-7640X 4GHz 6MB Smart Cache Box processor        

Regular price: € 289,95

Sale price: € 254,00


          Intel Core i7-7740X 4.3GHz 8MB Smart Cache Box processor        

Regular price: € 419,95

Sale price: € 346,00


          Special 318: Samsung Galaxy S8 Announcement        

TWiT Live Specials (Video-HD)

Samsung announces the Galaxy S8 and Galaxy S8+. The new 5.8" and 6.2" phones feature Samsung's new Infinity Display, 10 nm octa-core processors, and dual-pixel 12-megapixel rear-facing and 8-megapixel front-facing cameras, both with f/1.7 apertures. Samsung also announced new IoT integrations, the new Gear 360 camera, and Bixby, its new virtual assistant. The Samsung Galaxy S8 and S8 Plus will be available April 21st, with pre-orders starting tomorrow.

Hosts: Jason Howell and Ron Richards

Download or subscribe to this show at https://twit.tv/shows/twit-live-specials.

Thanks to CacheFly for the bandwidth for this special presentation.


          Special 314: Unboxing the Surface Studio        

TWiT Live Specials (Video-HD)

Microsoft's answer to the iMac, the Surface Studio, arrived at the Eastside Studio in Petaluma and Leo Laporte unboxed the elegant workstation.

The versatile machine has a 28" ultra-thin PixelSense display, an Intel Core processor, and a 1 TB hybrid hard drive with 64 GB of SSD. The Surface Studio also packs 8 GB of RAM, as well as Nvidia's GTX 965M. The unique hinged design allows the user to transform the machine from a traditional desktop into a tablet-like device. The Surface Dial can be used to customize shortcuts and provide a user interface for a variety of programs, including Spotify, Word, PowerPoint and Paint.

Host: Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/twit-live-specials.

Thanks to CacheFly for the bandwidth for this special presentation.


          Re: SSIS Frustration? Enter Pentaho        

Advanced ETL Processor Enterprise is also a good alternative to SSIS
http://www.etl-tools.com/ad...


          Re: Refactoring Day 9 : Extract Interface        

I appreciate that you want to bring understanding to the developers, but you misunderstood the point of this refactoring.

Both the first and the second version of the RegistrationProcessor are equally decoupled from the type of the object called registration in your example. In other words, the RegistrationProcessor knows nothing about the actual implementation of the Create method in either case. Moreover, it is not aware of the fact that ClassRegistration is declared as a class (and not an explicit interface).

Note that every class exposes its interface implicitly, and outer classes can refer to the actual implementation only through this implicit interface.

The real value of the example you provided is that you are able to limit the interface exposed by ClassRegistration to a subset interface that provides the operations used, for example, by a RegistrationProcessor. And this exposes the type-naming issue that was introduced in your example.

The original ClassRegistration should be named something like ClassManagement, and the extracted interface ClassRegistration (drop the I, as it brings no value to the type name).
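To make the renaming concrete, here is a minimal sketch of that shape. It is written in C++ rather than the C# of the original article, and the member names are invented for illustration:

#include <string>

// The extracted interface, named for what callers need (no "I" prefix).
struct ClassRegistration {
    virtual ~ClassRegistration() = default;
    virtual void Create(const std::string& student) = 0;
};

// The full implementation keeps the broader name and any extra operations.
class ClassManagement : public ClassRegistration {
public:
    void Create(const std::string& student) override { /* enroll the student */ }
    void Cancel(const std::string& student) { /* not needed by the processor */ }
};

// The processor depends only on the narrow interface it actually uses.
class RegistrationProcessor {
public:
    void Process(ClassRegistration& registration, const std::string& student) {
        registration.Create(student);
    }
};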

also see the original Fowler's example: http://martinfowler.com/ref...

Marek Dec
http://marekdec.wordpress.com


          Action Alert: National Leafy Green Agreement        
From NOFA/MASS, helping us to be ever vigilant about how one-size-fits-all policy and regulation affects local smaller-scale production:   Remember the Food Safety Modernization Act (FSMA) that passed last year? We won a hard fought battle, securing appropriate food safety rules for small-to-midsized farms and processors producing fresh and healthy food for local and regional markets. […]
          Guide for Picking The Best Android Phone for You        
Sony Xperia X10 vs Nexus One vs Motorola Droid vs Acer Liquid vs Archos


(Pictured: Xperia X10, Nexus One, Motorola Droid, Acer Liquid)



(Updated: 21st Jan 2010) The Android handset landscape has changed drastically over the past year, from a literal handful of options to – the fingers on both your hands, the toes on both your feet and all the mistresses Tiger Woods has had in the past 24 hours (OK, maybe 4 hours). You get the point though, there are quite a few options and through the course of 2010 these options will only increase.


The only other mainstream handset smartphone option that rivals the Android handset options available in 2010 will be the Windows mobile platform – and we're all rushing for it – not!


So what are the handsets to consider in 2010? The ones currently released on the market that we will look at are the Acer Liquid and Motorola Droid and an additional three to be released early 2010, the Sony Xperia X10, Google Nexus One (Passion, HTC Bravo) and Archos Phone Tablet – though we only have a handful of details on the phone.



(Pictured: Archos Phone)


We will look at hardware and software sub-categories, and compare the phones based on the information we have.


HARDWARE


Processor


The Nexus One and Sony Xperia X10 have the snappier Qualcomm Snapdragon 1 GHz processor on board. The Acer Liquid has a downclocked version of the Snapdragon running at 768 MHz, perhaps to conserve battery. This would probably put the Acer Liquid's performance more on par with the Motorola Droid's. The Archos Phone promises to be a really fast phone with an upgraded ARM Cortex processor running at 1 GHz and an improved GPU over the Droid and iPhone.


Nexus One: Qualcomm Snapdragon QSD 8250, 1.0 GHz
Motorola Droid: Texas Instruments OMAP 3430, 550 MHz
Sony Xperia X10: Qualcomm Snapdragon QSD 8250, 1.0 GHz
Acer Liquid: Qualcomm Snapdragon QSD 8250, 768 MHz
Archos Phone: ARM Cortex, 1 GHz


Graphics


The Snapdragon's Adreno 200 graphics core is phenomenal on the triangle render benchmark, coming in with a score of approximately 22 million triangles per second compared to approximately 7 million triangles per second on the Motorola Droid's SGX530. This is an important element for 3D graphics. Interestingly, the iPhone 3GS has a similar CPU to the Motorola Droid but an upgraded, faster SGX535 GPU capable of 28 million triangles/sec and 400 M pixels/sec. The Archos may get a better SGX GPU.


Xperia X-10 Graphics Demo


Nexus One: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Motorola Droid: PowerVR SGX530 graphics core with OpenGL ES 2.0; 7 M triangles/sec; 250 M pixels/sec; HD decode (720p)
Sony Xperia X10: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Acer Liquid: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Archos Phone: PowerVR SGX540?; 35 M triangles/sec; 1000 M pixels/sec



3-D Graphics Benchmark









Motorola Droid 20.7 FPS (Android 2.0).

Nexus One 27.6 FPS. (Android 2.1)

Acer Liquid 34 FPS. (Android 1.6)

Xperia X10 34FPS+ est. (Android 1.6)



Note: All phones were tested running WVGA resolution, 480 x 800 or 480 x 854. Different versions of Android will be a factor, e.g. Android 2.0+ reproduces 16 million colors vs 65K for 1.6. Older phones such as the G1 and iPhone 3GS may score 25-30 FPS, but they use a lower 480 x 320 resolution.



Memory/Storage


The Nexus One comes in with an impeccable 512MB of RAM. This provides an element of future-proofing for the hardware and puts it in a league of its own. The Xperia X10 comes with 1GB of ROM and 384 MB of RAM. The 1GB means you'll be able to have twice as many apps on your phone until Google lets you save apps to removable memory. The Acer Liquid and Droid are more or less the same.




Nexus One: 512 MB RAM, 512 MB flash
Motorola Droid: 256 MB RAM, 512 MB flash
Sony Xperia X10: 384 MB RAM, 1024 MB flash
Acer Liquid: 256 MB RAM, 512 MB flash
Archos Phone: not yet announced


Display


The Nexus One uses an AMOLED screen, which provides crisp images and more saturated colors than a TFT-LCD. It's also more energy efficient. The Xperia X10 packs a 4.0-inch TFT screen with 854 x 480 resolution; expect picture quality similar to the Motorola Droid from the Sony Ericsson phone. The Archos Phone promises to deliver an interesting experience that could potentially make it the king of Androids.



Spot the difference: Top TFT-LCD screen and bottom OLED



Nexus One: 800 x 480 px, 3.7 in (94 mm), WVGA, AMOLED
Motorola Droid: 854 x 480 px, 3.7 in (94 mm), WVGA, TFT-LCD
Sony Xperia X10: 854 x 480 px, 4.0 in (102 mm), WVGA, TFT-LCD
Acer Liquid: 800 x 480 px, 3.5 in (89 mm), WVGA, TFT-LCD
Archos Phone: 854 x 480 px, 4.3 in (109 mm), WVGA, AMOLED



Display Input


All standard stuff here. All are pretty much capacitive with multi-touch, depending on the continent you buy your phone in.



Nexus One, Motorola Droid, Sony Xperia X10, Acer Liquid, Archos Phone: all capacitive, multi-touch



Battery


The Xperia X10 has the largest battery, and might I add likely the best quality battery of the lot. It's the same battery used in the Xperia X1, where it performed admirably. Talk time for the Nexus One is very good, and we expect the Xperia X10 to match this or be marginally better. Of concern is the Nexus One's 3G standby time of 250 hours. It's worse than the other phones, but not bad at a little over 10 days! Updated 21st Jan 2010: confirmed Xperia battery times. The Xperia more or less performs at the same level as the other Android phones, delivering 5 hours of talk time.


Sony 1500 mAh Battery




Nexus One: 1400 mAh Li-Po; 7 h talk / 250 h standby (3G)
Motorola Droid: 1400 mAh Li-Po; 5 h talk / 380 h standby (3G)
Sony Xperia X10: 1500 mAh Li-Po; 5 h talk / 300 h standby (3G)
Acer Liquid: 1350 mAh Li-Po; 5 h talk / 400 h standby (3G)
Archos Phone: not yet announced



Communication


The phones are all capable of 3.5G (HSDPA 7.2 Mbit/s) data transfer. The Motorola Droid and Sony Xperia X10 can give you a little bit extra supporting 10.2 Mbit/s data transfer. Obviously the network must exist to support these speeds. Motorola is the only one with Class 12 EDGE, but this is not too important in this day and age of 3G.




Nexus One (Bravo): HSDPA 7.2 Mbit/s (1700 band); HSUPA 2.0-5.76 Mbit/s; GSM 850/900/1800/1900; EDGE Class 10; UMTS band 1/4/8 (2100/AWS/900); GPS; 3-3.5G network
Motorola Droid: HSDPA 10.2 Mbit/s; HSUPA 2.0-5.76 Mbit/s; GSM 850/900/1800/1900; EDGE Class 12; UMTS band 1/4/8 (2100/AWS/900); GPS; 3-3.5G network
Sony Xperia X10: HSDPA 10.2 Mbit/s; HSUPA 2.0-5.76 Mbit/s; GSM 850/900/1800/1900; EDGE Class 10; UMTS band 1/4/8 (2100/AWS/900); GPS; 3-3.5G network
Acer Liquid: HSDPA 7.2 Mbit/s; HSUPA 2.0-5.76 Mbit/s; GSM 850/900/1800/1900; EDGE Class 10; UMTS band 1/4/8 (2100/AWS/900); GPS; 3-3.5G network
Archos Phone: not yet announced


Connectivity:Bluetooth/Wifi


Nexus One is the only Android phone that currently offers 802.11n connectivity. In fact, I can't think of any other phone out there that also has 802.11n. This might be the Google Talk phone we all thought was heading our way after all! All phones have either bluetooth 2.0 or 2.1. These will essentially be the same as far as data transfer (3 Mbit/s) is concerned. Version 2.1 offers better power efficiency though and a few other enhancements.


Nexus One - Broadcom 802.11n



Nexus One: Bluetooth 2.1 + EDR; 802.11 b/g/n
Motorola Droid: Bluetooth 2.1 + EDR; 802.11 b/g
Sony Xperia X10: Bluetooth 2.1 + EDR; 802.11 b/g
Acer Liquid: Bluetooth 2.0 + EDR; 802.11 g only
Archos Phone: Bluetooth (version unknown); 802.11 b/g/n


Ports/Connectors/Sensors


The 2GB micro-SD card shipped with the Acer Liquid is unrealistic by today's standards. The Motorola Droid offers the best deal with a 16GB micro-SD. The Sony Xperia X10 ships with an 8GB micro-SD card, but remember that the Xperia X10 also has that slightly bigger 1GB of flash memory on board, for an impressive total of 9GB, expandable to a total of 33GB. Google decided to save on costs by only offering a 4GB micro-SD card with the Nexus One, but if the idea is to compete against the iPhone then 8GB should be the minimum. Clearly Motorola is on the right track with 16GB shipped, and you can't ignore the impressive 1GB ROM on the Xperia X10.


SanDisk working on 128GB Micro-SD




Nexus One: SIM card, 3.5 mm jack, micro-USB; 4 GB micro-SD shipped (Class 2, 32 GB supported); light sensor, proximity sensor, compass, accelerometer, cell/Wi-Fi positioning
Motorola Droid: SIM card, 3.5 mm jack, micro-USB; 16 GB micro-SD shipped (Class 6, 32 GB supported); light sensor, proximity sensor, compass, accelerometer, cell/Wi-Fi positioning
Sony Xperia X10: SIM card, 3.5 mm jack, micro-USB; 8 GB micro-SD shipped (class unknown, 32 GB supported); light sensor, proximity sensor, compass, accelerometer, cell/Wi-Fi positioning
Acer Liquid: SIM card, 3.5 mm jack, micro-USB; 2 GB micro-SD shipped (Class 2, 32 GB supported); light sensor, proximity sensor, compass, accelerometer, cell/Wi-Fi positioning
Archos Phone: SIM card, 3.5 mm jack, cell/Wi-Fi positioning; other details unknown



Case Material


The Motorola metal case is the sturdiest. Build quality for the Nexus One and Xperia X10 is very good. The Xperia X10 has reflective plastic, whilst the Nexus One is more industrial, with teflon and metal on the bottom. The Acer Liquid has average build quality, but that was always the intention with the Liquid in order to keep manufacturing costs low.



Nexus One: rubber/plastic
Motorola Droid: metal
Sony Xperia X10: plastic
Acer Liquid: plastic
Archos Phone: unknown


Keyboard


If you want a physical keyboard then the Droid is your only choice in the list. The keys on the Droid keyboard are basically flush, so you don't get the comfortable key-separation feel of a BlackBerry keyboard. The others (the Droid as well) have virtual keyboards which work in portrait or landscape mode.



Droid Slide-out keyboard



Nexus One: virtual
Motorola Droid: physical (slide-out)
Sony Xperia X10: virtual
Acer Liquid: virtual
Archos Phone: virtual


Camera


The Xperia X10 is one of the best camera phones. Sony used its camera know-how for its new smartphone lineup, and it will be hard to match up against Sony unless the other guys partner with someone like Canon. The X10 comes with an 8.1 MP camera with 16x digital zoom. The software has also been changed from standard Android to include typical camera options. Also included is a face-detection feature that recognizes up to four faces in a photo and appropriately tags/files the photo. The Motorola Droid comes in with a 5 MP camera with 4x digital zoom, compared to the 5 MP and 2x digital zoom on the Nexus One.



Xperia X10 sample photo

***Additional Photos***



Motorola Droid sample photo



Nexus One sample photo



Acer Liquid sample photo




Nexus One: 5 MP, 2x digital zoom, flash
Motorola Droid: 5 MP, 4x digital zoom, dual flash
Sony Xperia X10: 8.1 MP, 16x digital zoom, flash
Acer Liquid: 5 MP, no zoom, flash
Archos Phone: not yet announced



Video


Video-wise, the Nexus One, Motorola Droid and Xperia X10 will perform roughly the same.














Nexus One: 720 x 480 video recording, flash
Motorola Droid: 720 x 480 video recording, flash
Sony Xperia X10: 800 x 480 video recording, flash
Acer Liquid: 320 x 240 video recording, no flash
Archos Phone: not yet announced




Size/Height/Weight



The lightest and thinnest is the Nexus One. The Motorola Droid is weighed down by the metal used. They are all roughly the same size as the iPhone 3GS, which comes in at 115.5 x 62.1 x 12.3 mm and weighs 135 g.



Nexus One: 119 x 59.8 x 11.5 mm, 130 g
Motorola Droid: 115.8 x 60 x 13.7 mm, 169 g
Sony Xperia X10: 119 x 63 x 13 mm, 135 g
Acer Liquid: 115 x 62.5 x 12.5 mm, 135 g
Archos Phone: 10 mm deep; other dimensions and weight unknown




SOFTWARE


OS Level


The Nexus One has the most current OS level at 2.1. The Motorola Droid is expected to upgrade soon, as is the Acer Liquid. The heavily customized Xperia X10 will be more of a challenge to upgrade to 2.1.



Nexus One: Android 2.1
Motorola Droid: Android 2.0
Sony Xperia X10: Android 1.6
Acer Liquid: Android 1.6
Archos Phone: unknown




Customization


The Xperia X10 shines at demonstrating how customizable Android really is. The other three phones make very few changes to the standard Android OS.


Sony TimeScape/MediaScape



Nexus One: none
Motorola Droid: none
Sony Xperia X10: Rachael UI
Acer Liquid: Acer UI 3.0
Archos Phone: unknown



Application Market


We are likely to see more app markets emerge. Sony currently leads the way, and Motorola and HTC (Nexus One) will follow suit.



Nexus One: Android Market
Motorola Droid: Android Market
Sony Xperia X10: PlayNow, Android Market
Acer Liquid: Android Market
Archos Phone: unknown



Media


Mediascape is an ambitious effort to add decent media functionality to Android. Sony succeeds and also introduces a fun way to organize your media. Acer has Spinlet which is not as complex as Mediascape.



Nexus One: stock Android
Motorola Droid: stock Android
Sony Xperia X10: MediaScape
Acer Liquid: Spinlet
Archos Phone: unknown



Social Networking


Sony again leads the way on customization with Timescape. This is another good job by Sony, adding extra functionality to Android. Timescape helps you manage your contacts better and brings social networking and contacts together in one application.



Nexus One: stock Android
Motorola Droid: stock Android
Sony Xperia X10: TimeScape
Acer Liquid: stock Android
Archos Phone: unknown





          Android 5.1 1GB / 16GB Smart Watch Phone + Camera        
Android 5.1 1GB / 16GB Smart Watch Phone with 2.0 MP camera * 1.39" OLED round display, 400 x 400 resolution * MTK6580 quad-core processor * 1 GB RAM + 16 GB flash * Android 5.1 OS * Bluetooth / GPS / WiFi supported * Frequency band: 2G: GSM 8
          JNMS and Maxi-MMC updates        
This weekend I fixed some disc emulation issues for the JNMS and Maxi-MMC boards. I had previously erroneously identified these two boards but they are different.

The JNMS board is the one in the CDI 180 player (also called the JNMS player). It is not used in any other player and contains a CDIC (CD Interface Controller) chip but no SLAVE processor.

The Maxi-MMC board is the one in the CDI 601 and 602 players. From the emulation point of view it is virtually identical to the Mini-MMC board used by the CDI 605 player, but it has a different CDIC chip version. Both boards contain a SLAVE processor.

The link between the JNMS and Maxi-MMC boards is the CDIC chip: both turn out to have the same older CDIC chip version that differs in a few details from the version used on the Mini-MMC and Mono-I boards players (I described these differences in the earlier “CD-i 180 disc playing” post).

I noticed the JNMS / Maxi-MMC link from the CD-i player type table in the July 1996 issue of The Interactive Engineer (it’s on the ICDIA site); turns out I had misinterpreted the Board column on page 4 (there’s also an error there: the 601/602 certainly do not have the 180 board!).

After noticing this I did some testing and it turns out that the CDIC modifications needed for the 180 also work for the 601, including the TOC reading problem.

I have yet to find a way to get chip version information from the CDIC chip itself, so for the time being I’ve keyed the differences on the SLAVE software version. The 180 has no such chip, the 601 has version 1.x where the 605 has version 3.x. For now I’ve assumed that version 2.x also uses the older CDIC chip, but that may be wrong (the 602 or 604 might be interesting test cases).

Having done that, I did some more digging into the TOC read issue. It turns out that the 601 ROM performs CRC validation on the subcode Q data from the lead-in area (which is where the TOC is stored), and CD-i Emulator didn’t provide a valid CRC (no other ROMs I’ve seen so far validate this in software). The ROM even has compensation for shifts of between 1 and 7 bits in the incoming subcode Q data, probably because of some hardware timing issue.
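For reference, the core of such a check is just a 16-bit shift register clocked with the CRC-CCITT polynomial 0x1021. The bit-at-a-time sketch below is my own illustration, not CD-i Emulator's actual code, and it deliberately leaves open the subcode-Q-specific details (such as the initial register value and whether the stored CRC is inverted):

#include <cstdint>
#include <cstddef>

// Generic CRC-CCITT (polynomial 0x1021), processing one bit at a time.
uint16_t crc_ccitt(const uint8_t* data, size_t len, uint16_t crc = 0)
{
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;   // feed the next byte into the top
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc;
}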

I also noticed a bug in the ROM: it always uses the first sector buffer because it takes the current buffer bit from the wrong memory location. Not that this really matters because the TOC data is repeated multiple times; half of the repetitions will always land in the first buffer anyway. The bug is fixed in the 605 ROM.

Generating a valid CRC turned out to be straightforward (it's just a simple CRC-CCITT calculation), but the ROM wouldn't recognize it! After some head scratching I focused on the ROXL instruction (Rotate Left with Extend) used in the validation code. It is quite an esoteric instruction; could it be that there was an emulation bug here? It turns out that there was indeed: during rotation the contents of the X flag were put in the wrong bit. After fixing this the ROM properly recognized the data and the TOC reads became just as quick as on other player models.
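For readers unfamiliar with it, ROXL rotates the operand through a path that is one bit wider than the operand, because the X flag takes part in the rotation: the bit leaving the most significant position becomes the new X (and C), while the old X re-enters at bit 0. A small sketch of the word-sized case, as I understand the 68000 semantics (my own illustration, not the emulator's actual code):

#include <cstdint>

// ROXL.W: a 17-bit rotation formed by the 16-bit operand plus the X flag.
uint16_t roxl_word(uint16_t value, unsigned count, bool& x_flag)
{
    for (unsigned i = 0; i < count; ++i) {
        bool new_x = (value & 0x8000) != 0;                              // bit leaving bit 15
        value = static_cast<uint16_t>((value << 1) | (x_flag ? 1 : 0));  // old X enters bit 0
        x_flag = new_x;                                                  // becomes the new X (and C)
    }
    return value;
}

The emulation bug described above amounted to feeding the old X flag into the wrong bit position of the result.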

In search of version information for the CDIC chip I looked at the emulations and found one potential point of interest: the release number displayed by the service shell. This is a special GUI shell that performs some service functions; you can get to it by inserting a specially wired test plug into input port 1.

After some digging I found that the service shell obtains this number from the SLAVE processor, so it probably does not directly correspond to a CDIC version. The number does appear to differ from other version numbers, though, at least on my two 605 players.

The service shell obtains this number using two special I$SetStt calls into the CDIC driver; extending CD-i Link to remotely perform these same calls was easy. The new -cds[tatus] option can now be used to make the special calls. Here's some representative output of the -cds A3 option:

CD status A3000000 -> A3320000

Extending CD-i Link with remote OS9 calls is actually a fairly easy way to perform some information and tracing actions; I will probably use it for sorting out other dangling issues in the near future. When possible, this technique avoids the problems of writing a full-blown memory-resident trace module.

A new public beta release of CD-i Emulator that has full JNMS and Maxi-MMC support (among other things) is scheduled before the end of this year; there are still a few other issues that need sorting out first. This release should also have better support for the PCDI board used by several portable players, including the CD-i 370.

The major player holes still remaining are the Sony IVO-10/11 players, the Kyocera player, the Bang&Olufsen TV/player combi and of course the I2M board. There is some perspective for all of these but they are not high priority; except for the latter I expect all of them to be minor hardware variations of existing boards.

The I2M board has the interesting feature that it has multiple "ROMs" downloaded from the PC software (which is available for download from ICDIA); it also has a very different way of reading from CD as this is handled by the PC. As a consequence of this, audio is probably also handled differently. I have this board booting to a blue screen where it hangs on host communication.
          SCSI support and a big surprise        
Last week I added SCSI disk support for the CD-i 60x extension board to CD-i Emulator. It took somewhat longer than I expected, though. This was mostly because the DP5380 SCSI controller chip exposes most low-level details of the SCSI protocol to the driver, which means that all of these details have to be emulated.

The emulation ended up being a more-or-less complete software implementation of the parallel SCSI-2 protocol, including most of the low-level signaling on the BSY, SEL, ATN, MSG, C/D-, I/O-, REQ and ACK lines. This is all implemented by the new CScsiBus class, representing the SCSI bus, which connects up to 16 instances of the CScsiPort class that each represent a single SCSI-2 bus interface. I was able to mostly avoid per-byte signaling of REQ and ACK if the target device implementation supports block transfers, a big performance win.
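To give an idea of the shape of such an emulation, here is a heavily simplified sketch of a bus object with wired-OR signal levels and attached ports. Everything beyond the CScsiBus and CScsiPort names is invented for illustration and does not reflect CD-i Emulator's actual implementation:

#include <array>
#include <cstddef>

enum ScsiSignal { BSY, SEL, ATN, MSG, CD_, IO_, REQ, ACK, SIGNAL_COUNT };

class CScsiPort {
public:
    virtual ~CScsiPort() = default;
    virtual void OnBusChanged() {}          // a device reacts to signal edges here
};

class CScsiBus {
public:
    void Attach(std::size_t id, CScsiPort* port) { m_ports.at(id) = port; }
    void Assert(ScsiSignal s)  { ++m_levels[s]; Notify(); }
    void Release(ScsiSignal s) { if (m_levels[s] > 0) --m_levels[s]; Notify(); }
    bool IsAsserted(ScsiSignal s) const { return m_levels[s] > 0; }
private:
    void Notify() {                         // tell every attached port the bus changed
        for (CScsiPort* p : m_ports)
            if (p) p->OnBusChanged();
    }
    std::array<CScsiPort*, 16> m_ports{};           // up to 16 bus interfaces
    std::array<int, SIGNAL_COUNT> m_levels{};       // wired-OR: >0 means asserted
};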

The new CCdiScsiDevice class emulates the DP5380 controller chip, working in conjunction with the CCdiScsiRamDevice and CCdiScsiDmaDevice classes that emulate the 32 KB of local extension SRAM and the discrete DMA logic around it that are included on the CD-i 60x extension board.

The CD-i 182 extension uses a compatible SCSI controller chip but a different DMA controller and has no local extension SRAM. I have not yet emulated these because I have almost no software to test it.

The new CScsiDevice class implements a generic SCSI device emulating minimal versions of the four SCSI commands that are mandatory for all SCSI device types: TEST UNIT READY, REQUEST SENSE, INQUIRY and SEND DIAGNOSTIC. It implements most of the boiler-plate of low-level SCSI signaling for target devices and the full command and status phases of SCSI command processing, allowing subclasses to focus on implementing the content aspects of the data transfer phase.

The CScsiFile class emulates a SCSI device backed by a file on the host PC; it includes facilities for managing the SCSI block size and the transfer of block-sized data to and from the backing file.

The CScsiDisk and CScsiTape classes emulate a SCSI disk and tape device, respectively, currently supporting a block size of 512 bytes only. Instances of these classes are connected to the SCSI bus by using the new -s[csi]d[isk][0-7] FILE and -s[csi]t[ape][0-7] FILE options of CD-i Emulator.

The CD-i 60x extension board normally uses SCSI id 5; the built-in ROM device descriptors for SCSI disks use SCSI ids starting at zero (/h0 /h1 /h2) while the built-in device descriptor for a SCSI tape uses SCSI id 4 (/mt0). This means that the useful options with the 60x are -scsidisk0, -scsidisk1, -scsidisk2 and -scsitape4.

I've added the new dsk subdirectory to contain disk images; tape images have no standard location as they are mostly intended for bulk-transfer purposes (see below).

Inside the CD-i player this leads to the following response to the built-in inquire command:
$ inquire -i=0
vendor identification:"CDIFAN CDIEMU SCSIDISK "

$ inquire -i=4
vendor identification:"CDIFAN CDIEMU SCSITAPE "
where the "CDIFAN " part is the vendor name and the "CDIEMU SCSIXXXX " part is the product name.

In the previous post I described a 450 MB OS-9 hard disk image that I found on the Internet. After mounting it with -scsidisk0 mw.dsk I got the following output:
$ free /h0
"MediaWorkshop" created on: Feb 17, 1994
Capacity: 1015812 sectors (512-byte sectors, 32-sector clusters)
674144 free sectors, largest block 655552 sectors
345161728 of 520095744 bytes (329.17 of 496.00 Mb) free on media (66%)
335642624 bytes (320.09 Mb) in largest free block

$ dir -d /h0

Directory of /h0 23:49:36
ASU/ AUDIO/ CDI_BASECASE/ CINERGY/ CMDS/
COPY/ CURSORS/ DEFS/ DEMOS/ ENET/
ETC/ FDRAW/ FONTS/ FontExample/ ISP/
LIB/ MAUI/ MAUIDEMO/ MENU/ MWOS/
NFS/ README_CIN README_MWS SCRIPT/ SHARE/
SHIP/ SYS/ T2D_RUNTIME/ TEMP/ TEMPMARK/
TEST/ USR/ VIDEO/ abstract.txt bibliographic.txt
bkgd.c8 bkgd.d cdb cdb1 cdb2
cdi_opt_install chris_test cin copyright.mws copyright.txt
csd_605 custominits_cin delme dos/ file
font8x8 get globs.mod go go.mkfont
inetdb ipstat kick1a_f.c8 kick2a_f.c8 mtitle
mws net new_shell new_shell.stb scratch
screen startup_cin thelist
You can see why I thought it was a MediaWorkshop disc, but on closer inspection this turned out to be something quite different. Some basic scrutiny led to the hypothesis that this is probably a disk backup from someone at Microware working on early development of the DAVID (Digital Audio Video Interactive Decoder) platform. There are various surprises on the disk which I will describe below.

Anyway, I wanted to transfer the contents to the PC as a tar archive, similar to the procedure I used for my CD-i floppy collection. After starting CD-i Emulator with a -scsitape4 mw.tar option this was simply a matter of typing the following into the terminal window:
tar cb 1 /h0
This command runs the "tape archiver" program to create a tape with the contents of the /h0 directory, using a tape blocking size of 1 (necessary because my SCSI tape emulation doesn't yet support larger block sizes). The resulting mw.tar file on the PC is only 130 MB, not 450 MB, which indicates that the disk is mostly empty. At some point I might use an OS-9 "undelete" program to find out if there are additional surprises.

Extracting the mw.tar file was now a simple matter of running the PC command
tar xvf mw.tar
This produced an exact copy of the OS-9 directory structure and files on the PC.

Many of the directories on the hard disk are clearly copies of various distribution media (e.g. CDI_BASECASE, CINERGY, CURSORS, ENET, FONTS, ISP, MWOS, NFS). The contents of the ENET, ISP and NFS directories at first appear to match some of my floppies, including version numbers, but on closer inspection the binaries are different. Running some of them produces "Illegal instruction" errors so I suspect that these are 68020 binaries.

The SHIP directory contains some prerelease RTNFM software; the readme talks about PES which is a type of MPEG-2 stream (Packetized Elementary Stream). Various asset directories contain versions of a "DAVID" logo.

The CMDS directory contains working versions of the Microware C compiler, identical to the ones I already had and also many other programs. It also contains some "cdb" files (configuration database?) that mention the 68340 processor.

The contents of the CMDS/BOOTOBJS directory produced a first surprise: it contains a subdirectory JNMS containing, among others, files named "rb1793" and "scsijnms". Could these be floppy and SCSI drivers for the CD-i 182 extension, given that the 182 contains a 1793 floppy drive controller (the CD-i 60x uses a different one) and the player has a "JNMS" serial number?

Well, yes and no. Disassembly of the scsijnms file proved it to be compiled C code using an interface different from OS-9 2.4 drivers, so I suspect this is an OS-9 3.x driver. In any case, I cannot use it with the stock CD-i 180 player ROMs. Bummer...

And now for the big surprise: deeply hidden in a directory structure inside the innocently named COPY directory is the complete assembly source for the VMPEG video driver module "fmvdrv". At first glance it looked very familiar from my disassembly exercises on the identically-named Gate Array 2 MPEG driver module "fmvdrv", which is as expected because I had already noticed the large similarity between these two hardware generations.

The source calls the VMPEG hardware the "IC3" implementation, which matches CD-i digital video history as I know it. The Gate Array MPEG hardware would be "IC2" and the original prototype hardware would be "IC1". Furthermore, the sources contain three source files named fmvbugs1.a to fmvbugs3.a whose source file titles are "FMV first silicon bugs routines" to "FMV third silicon bugs routines". The supplied makefile currently uses only fmvbugs3.a as is to be expected for a VMPEG driver.

The fmvbugs1.a source contains some of the picture buffer manipulation logic that I've so far carefully avoided triggering because I couldn't understand it from my disassemblies, and this is now perfectly understandable: they are workarounds for hardware bugs!

As of two hours ago, I have verified that with a little tweaking and reconstruction of a single missing constants library file these sources produce the exact "fmvdrv" driver module contained in the vmpega.rom file directly obtained from my VMPEG cartridge.

In general these sources are very heavily commented, including numerous change management comments. They also include a full set of hardware register and bit names, although no comments directly describing the hardware. This should be of great help in finally getting the digital video emulation completely working.

All of the comments are in English, although a few stray words and developer initials lead me to believe that the programmers were either Dutch or Belgian.

Disassembly comparisons led me to the conclusion that carefully undoing numerous changes should result in exact sources for the GMPEGA2 driver module "fmvdrv" as well. I might even do it at some point, although this is not a high priority for me.

The disk image containing all of these surprises has been publicly available on the Internet since at least 2009, which is probably someone's mistake, but one for which I'm very grateful at this point!
CD-i 180 experimentation
Early this week, CDinteractive.co.uk forum user Erroneous came by and we spent an interesting evening taking apart our CDI 18x units and figuring out serial ports.

Whereas my set consists of a CDI 180/37 and a CDI 181/37 unit, his set is the full 180/181/182 ensemble with the added bonus of supporting 220V power. I was not previously aware that such units even existed, but it turns out he has a 180/20 + 181/20 + 182/00 combination.

I’ve taken some photographs of his set, both intact and in various dismantled states, and these can be found here on the CD-i Emulator website. Nothing particularly surprising, except for the small ROM size in the 182 unit: it’s only a pair of 27C512 chips which hold 32 KB each, for a total of only 64 KB!

Erroneous sold me his spare CD-i 180 remote unit and serial port adapter so I now have a mostly functioning CD-i 180 set. Unfortunately, it turns out that my 180 CD drive unit has problems so I cannot play actual discs, but the set works fine using the E1 Emulator.

It turned out that his set, however, has some defect in the 181 MMC unit which prevents it from reading discs, either from the 180 CD drive or from the E1 Emulator. Using my 181 and his 180 and 182 units we managed to get a fully working set, albeit running on mixed 120V / 220V wall power!

Because at first we couldn’t get a working command prompt on the serial port of his 182 unit, he undertook to solder a spare DB9 connector to the 181/182 interconnection bus, based on the pinout of the serial adapter which attaches to that same bus (which matches the pinout I had previously figured out by tracing the circuit board). This gave output but not a working command prompt either.

It finally turned out to be a feature of the OS9 System Disk that we were using; it boots properly when you select “Floppy Application” from the “System” menu, but its final step starts a command prompt for the /term device and it turns out there is no such device in the 180 player. It has three (!) serial port devices, but they are named /t0, /t01 and /t2 (see below), whereas the 60x players for which this disk was apparently intended do have a /term device. On the 180, avoiding the startup script by choosing “System” / “CD-RTOS” works fine, however.

When we figured this out, we could get a command line prompt on either serial port; the ROMs are smart enough to select the device where a terminal is actually connected.

We confirmed that the serial I/O chip in the 182 unit is indeed a 68681 chip as I previously suspected, which supports two serial port devices, only one of which has a connector on the outside of the unit. The connected device is supported via the /t0 device name; the unconnected one uses /t01. In addition to the 68070 built-in serial port this means that the 181+182 combination actually has three serial ports, but the usual hardware setup makes only one of them accessible at a time (connecting the units uses up the interconnection bus, which means that the serial adapter cannot be connected at the same time).

At this point, it was getting late and Erroneous departed for home, graciously allowing me to temporarily borrow his 18x set for some more experimentation and dumping.

When the serial port allowed me to take a look inside the running 180 player, it turned out that the four ROMs that I previously dumped were not in fact co-located in the address space. The “system” ROM pair lives at $180000 as expected, but on Maxi-MMC it is only 256 KB; the other 256 KB ROM pair lives at $700000 (I’ve called it the “asset” ROM because it contains only a font and pictures). Leaving out the asset ROM inside CD-i Emulator gives a working player but without any background images or buttons, just the text over a black background. You can still start a CD-i application or play a CD-Audio disc, though...

Another small factoid is that the 182 ROM contains a single picture ps_child.dyuv that at first appears to be a revised version of the identically-named one in the 181 ROM, but both pictures are bitwise identical except for the module edition number and CRC. Weird...

Dumping the ROMs of Erroneous’s 181 set turned up nothing new; they are bitwise identical to the ones from my own unit (not really surprising as both units have big “1.1” stickers on the back which signifies the “final” ROM update that all Philips 18x players received shortly before the market introduction of CD-i).

Having the 182 unit ROMs, I have now extended CD-i Emulator to also support the two additional serial ports, even though the second of these is not usable on the actual player! The floppy controller and the parallel and SCSI ports remain for the future.

Later this week I also took apart my new CD-i 180 remote unit, which can be used over infrared but also supports a cable connection (I’ll need to make my own cable). Pictures of this are on my site here. I suspected that the interconnection would use the I2C protocol and this indeed turned out to be the case. The unit contains another 84C21 mask-programmable microprocessor labeled “REMOCON Ver. 2.0” and its I2C SDA and SCLK pins are more or less directly connected to the cable connector, which also has RESET, GND and +5V power connections. This should allow me to connect any home-brewed pointing device over I2C.

From a bit of running system and driver inspection I also found out some more details about the bus locations of the floppy and SCSI controller chips in the 182 unit. There are two surprisingly empty ROM sockets on the SCSI extension board that are probably intended for SCSI driver and application software; except for booting support the other ROMs contain none of this.

With the information learned so far I have expanded the cditypes.rul file with CD-i 180 ROM recognition and put it in the CD-i Types section of the site.

Having two working floppy drives also allowed me to review my CD-i floppy collection and most of those appear to be perfectly readable; they may yet turn out to contain something interesting.
CD-i 180 internals
In the previous post I promised some ROM and chip finds. Well, here goes. To understand some of the details, you'll need some microprocessor and/or digital electronics knowledge, but even without that the gist of the text should be understandable.

The CDI 181 MMC unit contains the so-called Maxi-MMC board that is not used in any other CD-i player. Its closest cousin is the Mini-MMC board that is used in the CD-i 605 and CD-i 220 F1 players (a derivative of it is used in the CD-i 350/360 players).

The Mini-MMC board uses two 68HC05 slave processors for CD and pointing device control (they are usually called SERVO and SLAVE). The Maxi-MMC board does not have these chips, but it does have two PCF80C21 slave processors labeled RSX and TRANSDUCER that perform similar functions.

From their locations on the board I surmise that the RSX performs CD control functions; I know for sure that the TRANSDUCER performs only pointing device control. The latter is connected to the main 68070 processor via an I2C bus (I've actually traced the connections); I'm not completely sure yet about the RSX.

In order to emulate the pointing devices in CD-i Emulator, I had to reverse engineer the I2C protocol spoken by the TRANSDUCER chip; this was mostly a question of disassembling the "ceniic" and "periic" drivers in the ROM. The first of these is the low-level driver that serves as the common control point for the I2C bus; the second is the high-level driver that is instantiated separately for each type of pointing device. The ROMs support three such devices: /cdikeys, /ptr and /ptr2, corresponding to the player control keys and first and second pointing devices (the first pointing device is probably shared between the infrared remote sensor and the left pointing device port). Both pointing devices support absolute (e.g. touchpad) as well as relative (e.g. mouse) positioning.

Note that there is no built-in support for a CD-i keyboard or modem (you could use a serial port for this purpose).

However, knowing the I2C protocol does not tell me the exact protocol of the pointing devices, which therefore brings me no closer to constructing a "pointing device" that works with the two front panel MiniDIN-9 connectors. Note that these connectors are physically different from the MiniDIN 8 pointing device connectors used on most other CD-i players. According to the Philips flyers, these connectors have 6 (presumably digital) input signals and a "strobe" (STB) output signal. From the signal names I can make some educated guesses about the probable functions of the signals, but some quick tests with the BTN1 and BTN2 inputs did not pan out and it could be too complicated to figure out without measurement of a connected and working pointing device.

There is, however, also an infrared remote sensor that is supposed to expect the RC5 infrared signal protocol. This protocol supports only 2048 separate functions (32 groups of 64 each) so it should not be impossible to figure out, given a suitably programmable RC5 remote control or in the best case a PC RC5 adapter. I've been thinking about building one of the latter.
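
For reference, here is a sketch of how a standard 14-bit RC5 frame is put together; this is the generic protocol layout, not the 180's specific address or key codes, which are exactly the unknowns here:

    #include <cstdint>

    // Standard RC5: 2 start bits, 1 toggle bit, 5-bit system address (32 groups),
    // 6-bit command (64 functions), bi-phase modulated on a ~36 kHz carrier.
    uint16_t MakeRc5Frame(uint8_t systemAddress, uint8_t command, bool toggle)
    {
        uint16_t frame = 0;
        frame |= 1u << 13;                               // start bit 1
        frame |= 1u << 12;                               // start bit 2 (field bit in extended RC5)
        frame |= (toggle ? 1u : 0u) << 11;               // flips on every new key press
        frame |= (uint16_t)(systemAddress & 0x1F) << 6;  // which device group
        frame |= (uint16_t)(command & 0x3F);             // which function within the group
        return frame;
    }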

There is also a third possibility of getting a working pointing device. Although the case label of the front MiniDIN 8 connector is "CONTROL", the Philips flyers label it "IIC", which is another way of writing "I2C", although they don't give a pinout of the port. It seems plausible that the connector is connected to the I2C bus of the 68070, although I haven't been able to verify that yet (the multimeter finds no direct connections except GND, so some buffering must be involved). If there is indeed a connection, I would be able to externally connect to that bus and send and receive the I2C bus commands that I've already reverse engineered.

If even this doesn't work, I can always connect directly to the internal I2C bus, but that involves running three wires from inside the player to outside and I'm not very keen on that (yet, anyway).

Now, about the (so far) missing serial port. There is a driver for the 68070 on-chip UART in the ROMs (the u68070 driver which is accessible via the /t2 device), and the boot code actually writes a boot message to it (CD-i Emulator output):
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.
Using CD_RTOS kernel edition $53 revison $00
At first I thought that the UART would be connected to the "CONTROL" port on the front, but that does not appear to be the case. Tonight I verified (by tracing PCB connections with my multimeter) that the 68070 serial pins are connected to the PCB connector on the right side (they go through a pair of SN75188/SN75189 chips and some protection resistors; these chips are well-known RS232 line drivers/receivers). I even know the actual PCB pins, so if I can find a suitable 100-pin 0.01" spaced double edge print connector I can actually wire up the serial port.

Now for the bad news, however: the ROMs do not contain a serial port download routine. They contain a host of other goodies (more below) but not this particular beast. There is also no pointing device support for this port, contrary to all other players, so connecting up the serial port would not immediately gain me anything; I still need a working pointing device to actually start a CD-i disc…

There are no drivers for other serial ports in the ROMs, but the boot code does contain some support for a UART chip at address $340001 (probably a 68681 DUART included in the CDI 182 unit which I don't have). The support, however, is limited to the output of boot messages although the ROMs will actually prefer this port over the 68070 on-chip device if they find it.

As is to be expected from a development and test player, there is an elaborate set of boot options, but they can only be used if the ROMs contain the signature "IMS-TC" at byte offset $400 (the ROMs in my player contain FF bytes at these locations). And even then the options prompt will not appear unless you press the space bar on your serial terminal during player reset.

However, adding a -bootprompt option that handles both the signature and the space bar press to CD-i Emulator was not hard, and if you use that option with the 180 ROMs the following appears when resetting the player:
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.

A-Z = change option : <BSP> = clear options : <RETURN> = Boot Now

Boot options:- BQRS
As specified, you can change the options by typing letters; pressing Enter will start the boot process with the specified options.

From disassembling the boot code of the ROMs I've so far found the following options:

D = Download/Debug
F = Boot from Floppy
L = Apply options and present another options prompt (Loop)
M = Set NTSC Monitor mode
P = Set PAL mode
S = Set NTSC/PAL mode from switch
T = Set NTSC mode
W = Boot from SCSI disk (Winchester)

It could be that there's also a C option, and I've not yet found any implementations of the Q and R options that the ROMs include in the default set, but they could be hidden in OS-9 drivers instead of the boot code.

Once set, the options are saved in NVRAM at address $313FE0 as the default for prompts during subsequent reboots; they are not used for reboots where the option prompt is not invoked.

Options D, F and W look interesting, but further investigation leads to the conclusion that they are mostly useless without additional hardware.

Pressing lower-case D followed by Enter / Enter results in the following:
Boot options:- BQRSd
Boot options:- BDQRS
Enter size of download area in hex - just RETURN for none
called debugger

Rel: 00000000
Dn: 00000000 0000E430 0007000A 00000000 00000000 00000001 FFFFE000 00000000
An: 00180B84 00180570 00313FE0 00410000 00002500 00000500 00001500 000014B0
SR: 2704 (--S--7-----Z--) SSP: 000014B0 USP: 00000000
PC: 00180D2E - 08020016 btst #$0016,d2
debug:
One might think that entering a download size would perform some kind of download (hopefully via the serial port) but that is not the case. The "download" code just looks at location $2500 in RAM that's apparently supposed to be already filled (presumably via an In-Circuit Emulator or something like it).

However, invoking the debugger is interesting in itself. It looks like the Microware low-level RomBug debugger that is described in the Microware documentation, although I haven't found it in any other CD-i players. One could "download" data with the change command:
debug: c0
00000000 00 : 1
00000001 00 : 2
00000002 15 : 3
00000003 00 :
Not very user-friendly, but it could be done. The immediate catch is that it doesn't work with unmodified ROMs because of the "IMS-TC" signature check!

Trying the F option results in the following:
Boot options:- BQRSf
Boot options:- BFQRS
Booting from Floppy (WD 179x controller) - Please wait
This, however, needs the hardware in the CDI 182 set (it lives at $330001). I could emulate that in CD-i Emulator of course, but there's no real point at this time. It is interesting to note that the floppy controller in the CD-i 605 (which I haven't emulated either at this point) is a DP8473 which is register compatible with the uPD765A used in the original IBM PC but requires a totally different driver (it also lives at a different memory address, namely $282001).

Finally, trying the W options gives this:
Boot options:- BQRSw
Boot options:- BQRSW
Booting from RODIME RO 650 disk drive (NCR 5380 SCSI) - Please wait
Exception Error, vector offset $0008 addr $00181908
Fatal System Error; rebooting system
The hardware is apparently supposed to live at $410000 and is presumably emulatable; it's identical or at least similar to the DP5380 chip that is found on the CD-i 605 extension board, where it lives at $AA0000.

Some other things that I've found out:

The CDI 181 unit has 8 KB of NVRAM, but it does not use the M48T08 chip that's in all other Philips players; it's just a piece of RAM that lives at $310000 (even addresses only) and is supported by the "nvdrv" driver via the /nvr device.

In the CD-i 180 player the timekeeping functions are instead performed by a RICOH RP5C15 chip, the driver is appropriately called "rp5c15".

And there is a separate changeable battery inside the case; no "dead NVRAM" problems with this player! I don't know when the battery in my player was last changed but at the moment it's still functioning and had not lost the date/time when I first powered it on just over a week ago.

The IC CARD slot at the front of the player is handled like just another piece of NVRAM; it uses the same "nvdrv" driver but a different device: /icard. According to the device descriptor it can hold 32 KB of data; I would love to have one of those!
CD-i 180 adventures
Over the last week I have been playing with the CD-i 180 player set. There’s lots to tell about, so this will be a series of blog posts, this being the first installment.

The CD-i 180 is the original CD-i player, manufactured jointly by Philips and Sony/Matsushita, and for several years it was the development and “reference” player. The newer CD-i 605 player provided a more modern development option but it did not become the “reference” player for quite some years after its introduction.

The CD-i 180 set is quite bulky, as could be expected for first-generation hardware. I have added a picture of my set to the Hardware section of the CD-i Emulator website; more photos can be found here on the DutchAudioClassics.nl website (it’s the same player, as evidenced by the serial numbers).

The full set consists of the CDI 180 CD-i Player module, the CDI 181 Multimedia Controller or MMC module and the CDI 182 Expansion module. The modules are normally stacked on top of each other and have mechanical interlocks so they can be moved as a unit. Unfortunately, I do not have the CDI 182 Expansion module nor any user manuals; Philips brochures for the set can be found here on the ICDIA website.

Why am I interested in this dinosaur? It’s the first mass-produced CD-i player (granted, for relatively small masses), although there were presumably some earlier prototype players. As such, it contains the “original” hardware of the CD-i platform, which is interesting from both a historical and an emulation point of view.

For emulation purposes I have been trying to get hold of CD-i 180 ROMs for some years; there are several people that still have fully operational sets, but it hasn’t panned out yet. So when I saw a basic set for sale on the CD-Interactive forum I couldn’t resist the temptation. After some discussion and a little bartering with the seller I finally ordered the set about 10 days ago. Unfortunately, this set does not include a CDI 182 module or pointing device.

I had some reservations about this being a fully working set, but I figured that at least the ROM chips would probably be okay, if nothing else that would allow me to add support for this player type to CD-i Emulator.

In old hardware the mechanical parts are usually the first to fail, in this case the CDI 180 CD-i Player module (which is really just a CD drive with a 44.1 kHz digital output “DO” signal). A workaround for this would be using an E1 or E2 Emulator unit; these are basically CD drive simulators that on one side read a CD-i disc image from a connected SCSI hard disk and on the other side output the 44.1 kHz digital output “DO” signal. Both the CDI 180 and E1/E2 units are controlled via a 1200 baud RS232 serial input “RS” signal.

From my CD-i developer days I have two sets of both Emulator types so I started taking these out of storage. For practical reasons I decided to use an E1 unit because it has an internal SCSI hard disk and I did not have a spare one lying around. I also dug out an old Windows 98 PC, required because the Philips/OptImage emulation software doesn’t work under Windows XP and newer, and one of my 605 players (I also have two of those). Connecting everything took me a while but I had carefully stored all the required cables as well and after installing the software I had a working configuration after an hour or so. The entire configuration made quite a bit of mechanical and fan noise; I had forgotten this about older hardware!

I had selected the 605 unit with the Gate Array AH02 board because I was having emulation problems with that board, and I proceeded to do some MPEG tests on it. It turns out the hardware allows for some things that my emulator currently does not, which means that I need to do some rethinking. Anyway, on with the 180 story.

In preparation for the arrival of the 180 set I next prepared a disc image of the “OS-9 Disc” that I created in November 1993 while working as a CD-i developer. This disc contains all the OS-9 command-line programs from Professional OS-9, some OS-9 and CD-i utilities supplied by Philips and Microware and some homegrown ones as well. With this disc you can get a fully functional command-line prompt on any CD-i player with a serial port, which is very useful while researching a CD-i player’s internals.

The Philips/Optimage emulation software requires the disc image files to include the 2-second gap before logical block zero of the CD-i track, which is not usually included in the .bin or .iso files produced by CD image tools. So I modified the CD-i File program to convert my existing os9disc.bin file by prepending the 2-second gap, in the process also adding support for scrambling and unscrambling the sector data.

Scrambling is the process of XORing all data bytes in a CD-ROM or CD-i sector with a “scramble pattern” that is designed to avoid many contiguous identical data bytes which can supposedly confuse the tracking mechanism of CD drives (or so I’ve heard). It turned out that scrambling of the image data was not required but it did allow me to verify that the CD-I File converted image of a test disc is in fact identical to the one that the Philips/Optimage mastering tools produce, except for the ECC/EDC bytes of the gap sectors which CD-I File doesn’t know how to generate (yet). Fortunately this turned out not to be a problem, I could emulate the converted image just fine.
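
For the curious, here is a sketch of the scrambling step as I understand it from ECMA-130: a 15-bit feedback shift register (polynomial x^15 + x + 1, seeded with 1) generates the pattern that is XORed over bytes 12 to 2351 of each 2352-byte sector, leaving the 12-byte sync field alone; applying it twice restores the original data.

    #include <cstdint>

    void ScrambleSector(uint8_t sector[2352])
    {
        uint16_t lfsr = 0x0001;                     // initial shift register contents
        for (int i = 12; i < 2352; ++i) {           // sync bytes 0..11 are not scrambled
            uint8_t pattern = 0;
            for (int bit = 0; bit < 8; ++bit) {
                pattern |= (uint8_t)((lfsr & 1) << bit);
                uint16_t feedback = (lfsr ^ (lfsr >> 1)) & 1;    // x^15 + x + 1
                lfsr = (uint16_t)((lfsr >> 1) | (feedback << 14));
            }
            sector[i] ^= pattern;                   // XOR is its own inverse
        }
    }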

Last Thursday the 180 set arrived and in the evening I eagerly unpacked it. Everything appeared to be in tip-top shape, although the set had evidently seen use.

First disappointment: there is no serial port on the right side of the 181 module. I remembered that this was actually an option on the module and I had not even bothered to ask the seller about it! This would make ROM extraction harder, but I was not completely without hope: the front has a Mini-DIN 8 connector marked “CONTROL” and I fully expected this to be a “standard” CD-i serial port because I seemed to remember that you could connect standard CD-i pointing devices to this port, especially a mouse. The built-in UART functions of the 68070 processor chip would have to be connected up somewhere, after all.

Second disappointment: the modules require 120V power, not the 220V we have here in Holland. I did not have a voltage converter handy so after some phone discussion with a hardware-knowledgeable friend we determined that powering up was not yet a safe option. He gave me some possible options depending on the internal configuration so I proceeded to open up the CDI 181 module, of course also motivated by curiosity.

The first thing I noticed was that there were some screws missing; obviously the module had been opened before and the person doing it had been somewhat careless. The internals also seemed somewhat familiar, especially the looks of the stickers on the ROM chips and the placement of some small yellow stickers on various other chips.

Proceeding to the primary reason for opening up the module, I next checked the power supply configuration. Alas, nothing is reconfigurable for 220V; it is a fully discrete unit with the transformer actually soldered to the circuit board on both the input and output side. There are also surprisingly many connections to the actual MMC processor board and on close inspection weird voltages like –9V and +9V are printed near the power supply outputs, apart from the expected +5V and +/–12V, so connecting a different power supply would also be a major undertaking.

After some pondering of the internals I closed up the module again and proceeded to closely inspect the back side for serial numbers, notices, etc. They seemed somewhat familiar, but that isn’t weird in itself, as numbers often do. Out of pure curiosity I surfed to the DutchAudioClassics.nl website to compare serial numbers, wanting to know the place of my set in the production runs.

Surprise: the serial numbers are identical! It appears that this exact set was previously owned by the owner of that website or perhaps he got the photographs from someone else. This also explained why the internals had seemed familiar: I had actually seen them before!

I verified with the seller of the set that he doesn’t know anything about the photographs; apparently my set has had at least four owners, assuming that the website owner wasn’t the original one.

On Friday I obtained a 120V converter (they were unexpectedly cheap) and that evening I proceeded to power up the 180 set. I got a nice main menu picture immediately, so I proceeded to attempt to start a CD-i disc. It did not start automatically when I inserted it, which on second thought makes perfect sense because the 181 MMC module has no way to know that you’ve just inserted a disc: this information is not communicated over the 180/181 interconnection. So I would need to click on the “CD-I” button to start a disc.

To click on a screen button you need a supported pointing device, so I proceeded to connect the trusty white professional CD-i mouse that belongs with my 605 players. It doesn’t work!

There are some mechanical issues which make it doubtful that the MiniDIN connector plugs connect properly, so I tried an expansion cable that fit better. Still no dice.

The next step was trying some other CD-i pointing devices, but none of them worked. No pointing devices came with the set, and the seller had advised me thus (they were presumably lost or sold separately by some previous owner). The only remaining option seemed to be the wireless remote control sensor, which supposedly uses RC5.

I tried every remote in my home, including the CD-i ones, but none of them gave any reaction. After some research into the RC5 protocol this is not surprising: the 180 set probably has a distinct system address code. Not having a programmable remote handy nor a PC capable of generating infrared signals (none of my PCs have IrDA), I am again stuck!

I spent some time surfing the Internet looking for RC5 remotes and PC interfaces that can generate RC5 signals. Programmable remotes requiring a learning stage are obviously not an option so it will have to be a fully PC-programmable remote which are somewhat expensive and I’m not convinced they would work. The PC interface seems the best option for now; I found some do-it-yourself circuits and kits but it is all quite involved. I’ve also given some thought to PIC kits which could in principle also support a standard CD-i or PC mouse or even a joystick, but I haven’t pursued these options much further yet.

Next I went looking for ways to at least get the contents of the ROM chips, as I had determined that these were socketed inside the MMC module and could easily be removed. There are four 27C100 chips inside the module, each of which contains 128 KB of data for a total of 512 KB, which is the same as for the CD-i 605 player (ignoring expansion and full-motion video ROMs). The regular way to do this involves using a ROM reading device, but I don’t have one handy that supports this chip type and neither does the hardware-knowledgeable friend I mentioned earlier.

I do have access to an old 8-bit Z80 hobbyist-built system capable of reading and writing up to 27512 chips, which are 64 KB; it is possible to extend this to at least read the 27C100 chip type. This would require adapting the socket (the 27512 is 28 pins whereas the 27C100 has 32 pins) and adding one extra address bit, if nothing else with just a spare wire. But the Z80 system is not at my house and some hardware modifications to it would be required, for which I would have to inspect the system first and dig up the circuit diagrams; all quite disappointing.

While researching the chip pinouts I suddenly had an idea: what if I used the CD-i 605 Expansion board, which also has ROM sockets? This seemed an option but with two kids running around I did not want to open up the set. That evening, however, I took the board out of the 605 (this is easily done as both player and board were designed for it) and found that this Expansion board contains two 27C020 chips, each containing 256 KB of data. These are also 32 pins but the pinouts are a little different, so a socket adapter would also be needed. I checked the 605 technical manual and it did not mention anything about configurable ROM chip types (it did mention configurable RAM chip types, though) so an adapter seemed the way to go. I collected some spare 40-pin sockets from storage (boy, have I got a lot of that) and proceeded to open up the 180 set and take out the ROM chips.

When determining the mechanical fit of the two sockets for the adapter I noticed three jumpers adjacent to the ROM sockets of the expansion board, and I wondered… Tracing the board connections indicated that these jumpers were indeed connected to exactly the ROM socket pins that differ between 27C100 and 27C020, and other connections made it at least plausible that these jumpers were put there for exactly this purpose.

So I changed the jumpers and inserted one 180 ROM. This would avoid OS-9 inadvertently using data from the ROM because only half of each 16-bit word would be present, thus ensuring that no module headers would be detected, and in the event of disaster I would lose only a single ROM chip (not that I expected that to be very likely, but you never know).

Powering up the player worked exactly as expected, no suspicious smoke or heat generation, so the next step was software. It turns out that CD-i Link already supports downloading of ROM data from specific memory addresses and I had already determined those addresses from the 605 technical manual. So I connected the CD-i 605 null-modem cable with my USB-to-Serial adapter between CD-i player and my laptop and fired off the command line:

cdilink -p 3 -a 50000 -s 256K -u u21.rom

(U21 being the socket number of the specific ROM I chose first).

After a minute I aborted the upload and checked the result, and lo and behold the u21.rom file looked like an even-byte-only ROM dump:
00000000  4a00 000b 0000 0000 0004 8000 0000 0000 J...............
00000010 0000 0000 0000 003a 0000 705f 6d6c 2e6f .......:..p_ml.o
00000020 7406 0c20 0000 0000 0101 0101 0101 0101 t.. ............
This was hopeful, so I restarted the upload and waited some six minutes for it to complete. Just to be sure I redid the upload from address 58000 and got an identical file, thus ruling out any flakey bits or timing problems (I had already checked that the access times on the 27C100 and 27C020 chips were identical, at 150 ns).

In an attempt to speed up the procedure, I next tried two ROMs at once, using ones that I thought were not a matched even/odd set. The 605 would not boot! It later turned out that the socket numbering did not correspond to the even/odd pairing as I expected, so this was probably caused by the two ROMs being exactly a matched set and OS-9 getting confused as a result. But using a single ROM it worked fine.

I proceeded to repeat the following procedure for the next three ROMs: turn off the 605, remove the expansion board, unsocket the previous ROM chip, socket the next ROM chip, reinsert the expansion board, turn on the 605 and run CD-i Link twice. It took a while, all in all just under an hour.

While these uploads were running I wrote two small programs, rsplit and rjoin, to manipulate the ROM files into a correct 512 KB 180 ROM image. Around 00:30 I had a final cdi180b.rom file that looked good and I ran it through cditype -mod to verify that it indeed looked like a CD-i player ROM:
  Addr     Size      Owner    Perm Type Revs  Ed #  Crc   Module name
-------- -------- ----------- ---- ---- ---- ----- ------ ------------
0000509a 192 0.0 0003 Data 8001 1 fba055 copyright
0000515a 26650 0.0 0555 Sys a000 83 090798 kernel
0000b974 344 0.0 0555 Sys 8002 22 b20da9 init
0000bacc 2848 0.0 0555 Fman a00b 35 28611f ucm
0000c5ec 5592 0.0 0555 Fman a000 17 63023d nrf
0000dbc4 2270 0.0 0555 Fman a000 35 d6a976 pipeman
0000e4a2 774 0.0 0555 Driv a001 6 81a3e9 nvdrv
0000e7a8 356 0.0 0555 Sys a01e 15 e69105 rp5c15
0000e90c 136 0.0 0555 Desc 8000 1 f25f23 tim070
0000e994 420 0.0 0555 Driv a00c 6 7b3913 tim070driv
0000eb38 172 0.0 0555 Driv a000 1 407f81 null
0000ebe4 102 0.0 0555 Desc 8000 2 cf450e pipe
0000ec4a 94 0.0 0555 Desc 8000 1 f54010 nvr
0000eca8 96 0.0 0555 Desc 8000 1 17ec68 icard
0000ed08 1934 0.0 0555 Fman a000 31 b41f17 scf
0000f496 120 0.0 0555 Desc 8000 61 dd8776 t2
0000f50e 1578 0.0 0555 Driv a020 16 d0a854 u68070
0000fb38 176 0.1 0777 5 8001 1 a519f6 csd_mmc
0000fbe8 5026 0.0 0555 Sys a000 292 e33cc5 csdinit
00010f8a 136 0.0 0555 Desc 8000 6 041e2b iic
00011012 152 0.0 0555 Driv a02c 22 e29688 ceniic
000110aa 166 0.0 0555 Desc 8000 8 c5b823 ptr
00011150 196 0.0 0555 Desc 8000 8 a0e276 cdikeys
00011214 168 0.0 0555 Desc 8000 8 439a33 ptr2
000112bc 3134 0.0 0555 Driv a016 11 faf88d periic
00011efa 4510 0.0 0555 Fman a003 96 a4d145 cdfm
00013098 15222 0.0 0555 Driv a038 28 122c79 cdap18x
00016c0e 134 0.0 0555 Desc 8000 2 35f12f cd
00016c94 134 0.0 0555 Desc 8000 2 d2ce2f ap
00016d1a 130 0.0 0555 Desc 8000 1 1586c2 vid
00016d9c 18082 10.48 0555 Trap c00a 6 5f673d cio
0001b43e 7798 1.0 0555 Trap c001 13 46c5dc math
0001d2b4 2992 0.0 0555 Data 8020 1 191a59 FONT8X8
0001de64 134 0.0 0555 Desc 8000 2 c5ed0e dd
0001deea 66564 0.0 0555 Driv a012 48 660a91 video
0002e2ee 62622 0.1 0555 Prog 8008 20 ec5459 ps
0003d78c 4272 0.0 0003 Data 8001 1 9f3982 ps_medium.font
0003e83c 800 0.0 0003 Data 8002 1 c1ac25 ps_icons.clut
00040000 2976 0.0 0003 Data 8002 1 0a3b97 ps_small.font
00040ba0 7456 0.0 0003 Data 8002 1 764338 ps_icons.clu8
000428c0 107600 0.0 0003 Data 8002 1 7b9b4e ps_panel.dyuv
0005cd10 35360 0.0 0003 Data 8001 1 2a8fcd ps_girl.dyuv
00065730 35360 0.0 0003 Data 8002 1 e1bb6a ps_mesa.dyuv
0006e150 35360 0.0 0003 Data 8002 1 8e394b ps_map.dyuv
00076b70 35360 0.0 0003 Data 8002 1 c60e5e ps_kids.dyuv

File Size Type Description
------------ ------ ------------ ------------
cdi180b.rom 512K cdi000x.rom Unknown CD-i system ROM
cdi180b.rom 512K cdi000x.mdl Unknown CD-i player
cdi180b.rom 512K unknown.brd Unknown board
Of course cditype didn’t correctly detect the ROM, player and board type, but the list of modules looks exactly like a CD-i player system ROM. It is in fact very similar to the CD-i 605 system ROM; the major differences are the presence of the icard and *iic drivers, the absence of a slave module and the different player shell (a ps module with separate ps_* data modules instead of a single play module).
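
The rjoin step mentioned above is essentially just byte interleaving: one ROM dump supplies the even-address bytes of each 16-bit word and the other the odd-address bytes. A minimal sketch of that idea (not my actual tool; which chip supplies the even bytes depends on the board wiring, so the argument order is an assumption):

    #include <cstdio>

    int main(int argc, char** argv)     // usage: rjoin even.rom odd.rom combined.rom
    {
        if (argc != 4) return 1;
        FILE* even = fopen(argv[1], "rb");
        FILE* odd  = fopen(argv[2], "rb");
        FILE* out  = fopen(argv[3], "wb");
        if (!even || !odd || !out) return 1;
        int e, o;
        while ((e = fgetc(even)) != EOF && (o = fgetc(odd)) != EOF) {
            fputc(e, out);              // byte from the even-address chip
            fputc(o, out);              // byte from the odd-address chip
        }
        fclose(even); fclose(odd); fclose(out);
        return 0;
    }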

It being quite late already, I resocketed all the ROMs in the proper places and closed up both players, after testing that they were both fully functional (insofar as I could test the 180 set), fully intending to clean up and go to bed. As an afterthought, I took a picture of the running 180 set and posted it on the CD-Interactive forums as the definitive answer to the 50/60 Hz power question I’d asked there earlier.

The CD-i Emulator urge started itching, however, so I decided to give emulation of my new ROM file a quick go, fully intending to stop at any major problems. I didn’t encounter any of those, and three hours later I had a running CD-i 180 player. I reported the fact on the CDinteractive forum, noting that there was no pointing device or disc access yet, and went to get some well-deserved sleep. Both of these issues are major ones, and I postponed them to the next day.

To get the new player type up and running inside CD-i Emulator, I started by using the CD-i 605 F1 system specification files cdi605a.mdl and minimmc.brd as templates to create the new CD-i 180 F2 system files cdi180b.mdl and maximmc.brd. Next I fired up the emulator and was rewarded with bus errors. Not unexpected and a good indicator of where the problems are. Using the debugger and disassembler I quickly determined that the problems were, as expected, the presence of the VSR instead of VSD and the replacement of the SLAVE by something else. Straightening these out took a bit of time but it was not hard work and very similar to work I had done before on other player types.

This time at least the processor and most of the hardware were known and already emulated; for the Portable CD-i board (used by the CD-i 370, DVE200 and GDI700 players) neither of these was the case, as those players use the 68341 so-called integrated CD-i engine, which in my opinion is sorely misnamed as there is nothing CD-i about the chip: it is just the Motorola version of a 68K processor with many on-chip peripherals, remarkably similar to the Philips 68070 in basic functionality.

Saturday was spent doing household chores with ROM research in between, looking for the way to get the pointing device working. It turned out to be quite involved but at the end of the day I had it sort of flakily working in a kludgy way; I’ll report the details in a next blog post.

Sunday I spent some time fixing the flakiness and thinking a lot about fixing the kludginess; this remains to be done. I also spent time making screenshots and writing this blog post.

So to finish up, there is now a series of 180 screenshots here on the CD-i Emulator website as reported in the What's New section. A very nice player shell, actually, especially for a first generation machine.

I will report some ROM and chip finds including new hopes for replacing the missing pointing device in a next blog post.
MPEG decoding, state save/restore, NRF emulation, ...
It's been a while since I wrote anything here, but that doesn't mean that work on CD-i Emulator has stopped. On the contrary, a lot has happened in the last month and describing all of it will take a very long blog post. So here goes…

Last January an annoying date-checking bug was found which forced me to release beta2 somewhat earlier than anticipated. After that I did no further work on CD-i Emulator. There were various reasons for this, but the most important one was a very busy period at my day job.

After a well-earned vacation I resumed CD-i related work in early August. First I spent a few days on Walter Hunt's OS-9 port of gcc, the GNU C/C++ Compiler, that I found in October of last year. Getting it working on a modern Cygwin installation was interesting and something very different from my usual line of work. The result could be useful for homebrew activities: it's a much more usable C compiler than the Microware OS-9 one and supports C++ as a bonus. I intend to use this for ROM-less emulation validation some day; see also below. The sources need to be released but I haven't gotten to that stage yet.

After that I had another go at the Digital Video cartridge emulation. At the point where I left off last year the major stumbling block was the presumed picture / frame buffering logic of the MPEG video driver. When the appropriate interrupt status bits are set the driver starts copying a bulk of status information to an array of device registers and it will sometimes also read from those registers. This is all controlled by several status and timing registers that are also referenced elsewhere and I previously could not get a handle on it.

My first attempt this time was spending another few days staring at it and tracing it, but this did not gain me much new understanding. Finally I decided to just leave it for now and see how far I could get without understanding this part of the driver. I decided to once again attempt to get "CD-i Full Motion Video Technical Aspects" working.

This CD-i was produced by Philips to give future Full Motion Video (as the new MPEG playback functions were called at the time) developers a demonstration of the technical capabilities of the new hardware, at a time when this hardware was still in the early beta phase. The CD-i actually contains the compiler libraries necessary for making FMV calls from CD-i applications, as these had not previously been widely distributed.

It is not a very slick disc visually, being intended for developers, but it demonstrates a number of FMV techniques such as regular playback, playback control including pause, slow motion and single step, freeze frame and forward/backward scan, special effects like scrolling the FMV window, a seamless jump and a sample of overlay effects with the CD-i base case video planes.

I had previously tried to run this disc on CD-i Emulator, but it always crashed for an unknown reason that I attributed to MPEG device emulation problems. This time I traced back the crash and it turned out to have nothing at all to do with FMV playback but was instead caused by an incorrect emulation of the 68000 instruction "move ea,ccr" which is supposed to set the condition code register (ccr) to the value specified by the effective address (ea). In the processor manual this is classified as a word instruction and I had emulated it as such, which turned out to be wrong as it caused a word write to the full status register which should have been a byte write to the lower eight bits of it which hold the condition codes.
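
In other words, the fix is to read the word-sized operand but store only its low byte, leaving the system byte of the status register (trace, supervisor and interrupt mask bits) untouched. A minimal sketch with invented names, not the actual emulator code:

    #include <cstdint>

    void MoveToCcr(uint16_t& statusRegister, uint16_t operand)
    {
        // Only the condition codes (low byte) are affected; the system byte is preserved.
        statusRegister = (uint16_t)((statusRegister & 0xFF00u) | (operand & 0x00FFu));
    }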

The problem manifested itself when the application calls the math trap handler for some mundane number calculations, which were naturally supposed to set the condition codes. The value written to the status register inadvertently changed the processor from user to system mode (and also scrambled the active interrupt masking level) which caused an instant stack switch that caused a bus error when the trap handler attempts to return to the application program (the cpu took the return address from the wrong stack and got garbage instead).

Most CD-i applications probably don't use the math trap handler so the problem went undetected for a long time. Now that it's fixed some other titles have probably started working but I haven't tested that.

After this, the FMV Technical Aspects application would get to its main menu screen, allowing me to start FMV playback operations. Regular playback worked fine until the end of the video clip, where there turned out to be status bit generation issues that prevented the application from properly detecting the end of video clip condition (the decoder is supposed to send a "buffer underflow" signal, among others, after the end of the MPEG data and my emulation didn't do that yet).

This was not very easy to fix because of the way that MPEG data buffering and decoding is handled inside CD-i Emulator, which I'll get into below. So it took me some time.

Regular play working fine, I started worrying about window control. This was the area where I feared the picture buffering stuff, but it turned out that this was easily bypassed. The horizontal / vertical scrolling functions were ideal to test this but it took me some time to get it working. There were bugs in several areas, including my integration of the MPEG video decoding code, which I took from the well-known mpeg2dec package. This code is written to decode a single video sequence and consequently did not handle image size changes without some re-initialization calls at the appropriate times. Failing that, it mostly just crashed (at the Windows application level) due to out-of-bounds video buffer accesses.
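
The missing piece boils down to a guard like the following sketch (hypothetical names; the real integration naturally goes through mpeg2dec's own initialization calls): when a sequence header announces a different coded picture size, the frame buffers have to be torn down and reallocated before decoding continues.

    #include <cstdint>
    #include <cstdlib>
    #include <cstddef>

    struct VideoBuffers {
        int width = 0, height = 0;
        uint8_t* plane[3] = {};                        // Y, Cb, Cr for one frame (4:2:0)
    };

    void OnSequenceHeader(VideoBuffers& v, int codedWidth, int codedHeight)
    {
        if (codedWidth == v.width && codedHeight == v.height)
            return;                                    // same geometry, keep the buffers
        for (auto& p : v.plane) { free(p); p = nullptr; }
        v.width = codedWidth;
        v.height = codedHeight;
        v.plane[0] = (uint8_t*)malloc((size_t)codedWidth * codedHeight);      // luma
        v.plane[1] = (uint8_t*)malloc((size_t)codedWidth * codedHeight / 4);  // Cb
        v.plane[2] = (uint8_t*)malloc((size_t)codedWidth * codedHeight / 4);  // Cr
    }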

Another issue was the timing of device register updates for image size changes; I turned out to have the basic mechanism wrong and consequently the driver would keep modifying the window parameters to incorrect values.

Having all of the above fixed, I returned my attention to playback control. So far I can get the video playback properly paused, but I haven't been able to get it properly resumed. For some reason the application resumes the MPEG playback but it doesn't resume the disc playback. Since the driver waits for new data to arrive from disc before actually resuming MPEG playback nothing happens (this is documented as such). The application is presumably expecting some signal from the driver to get into the proper state for resuming disc playback, but I haven't found it yet.

At this point, it seemed promising to look at other CD-i titles using playback control and the Philips Video CD application is an obvious candidate. Again, regular playback appears to work fine, but playback control (including pause/resume) does not. It turns out that this application uses a different driver call (it uses MV_ChSpeed instead of MV_Pause, probably in preparation for possible slow motion or single step), which never completes successfully, probably again because of device status signaling. Similar issues appear to block playback control in a few other titles I tried.

I've given some thought to tracing driver calls and signals on an actual player to see what CD-i Emulator is doing wrong, and it appears to be relatively simple; there's just a bandwidth issue because all of the trace output will have to go out the serial port, which can go no higher than 19200 baud. Some kind of data compression is obviously needed and I've determined a relatively simple scheme that should be enough (the CD-i player side will all need to be coded in 68000 machine language so simplicity is important!), but I haven't actually written any code for it yet.

I know there are issues with the proper timing of some video status signals. Things like start-of-sequence, end-of-sequence and start-of-picture-group should be delayed until display of the corresponding picture; at present they are delivered at decoding time, which can be a few pictures early. But that does not really affect the titles I've tried so far, because they do not attempt picture-synced operations. An application like The Lost Ride might be sensitive to things like this, though, and it needs to be fixed at some point. Similar issues are probably present with time code delivery. In addition, the last-picture-displayed and buffer-underflow signals are not always properly sent; I'm fixing these as I go along.

In the process, I decided that the magenta border was getting annoying and tried to fix it. That turned out to be harder than I thought. The MPEG chip has a special border color register that is written by the MV_BColor driver call and it seemed enough to just pass the color value to the MPEG window overlay routines. Well, not so. Again the issue turned out to be timing of decoder status signals, but of a different kind. The driver doesn't write the border color registers until it has seen some progress in certain timing registers related to the picture buffering thing, presumably to avoid visual flashes or something on the actual hardware. Fortunately, it turned out to be easy to simulate that progress, taking care not to trigger the complicated picture buffer code that I have so far managed to keep dormant.

At some point, possibly related to slow motion or freeze frame, I might need to actually tackle that code but I hope to by that time have gained more understanding of the supposed workings of the MPEG chip.

Looking at the above, you might think that all of the difficulties are with the MPEG video decoding and that is indeed mostly true. I did have to fix something in the MPEG audio decoding, related to the pause/resume problems, and that was the updating of the audio decoder clock. When audio and video playback are synchronized the MPEG video driver uses the MPEG audio clock as its timing reference, which means that it has to be stopped and restarted when video playback control operations occur. Since I had never before seriously tested this, the audio clock wasn't stopped at all and the video driver obligingly continued decoding and displaying pictures until it ran out of buffered data.

There is currently just one known problem with the MPEG audio decoding: the audio isn't properly attenuated as specified by the driver. This causes little audio distortions at some stream transitions and when buffers run out. There is also a problem with base case audio synchronization but that is hard to trigger and possibly even not audible in many titles so I'll worry about that much later.

Above I promised to get into the MPEG data buffering and decoding issue. The basic problem is one of conceptual mismatch: the CD-i decoding hardware gets data "pushed" into it (by DMA or direct device I/O) at the behest of the driver, whereas the MPEG decoding code (based on the publicly available mpeg2dec and musicout programs from the MPEG Software Simulation Group) expects to "pull" the data it needs during decoding. Things get messy when the decoding runs out of data, as the code does not expect to ever do so (it was originally written to decode from a disc file which of course never runs out of data until the end of the sequence). Some obvious solutions include putting the decoding in a separate thread (which given multi-core processors might be a good idea anyway from a performance perspective) and modifying it to become restartable at some previous sync point (most easily this would be the start of an audio frame or a picture or picture slice). Both options are somewhat involved although they have obvious benefits, and it may turn out that I will need to do one of them anyway at some point. For now I've avoided the problems by carefully timing calls into the MPEG decoding code so that enough data to decode a single audio frame or video picture should always be available; the MPEG data stream at the system level contains enough timestamp and buffering information to make this possible (in particular, it specifies the exact decoding time of every audio frame or video picture in relation to the timing of the data stream, thus making it possible to make those calls into the decoding code at a time that a valid MPEG data stream will have already filled the buffers far enough).
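
A minimal sketch of that push/pull bridge, with invented names: the driver/DMA side pushes data as it arrives, and the scheduling side only calls into the pull-style decoder once the stream timing guarantees that a complete audio frame or picture is already buffered.

    #include <cstdint>
    #include <cstddef>
    #include <deque>

    class CMpegDataBridge
    {
    public:
        void Push(const uint8_t* data, size_t length)        // driver / DMA side
        {
            m_fifo.insert(m_fifo.end(), data, data + length);
        }
        bool MaybeDecodeUnit(uint64_t streamClock)           // emulator scheduling side
        {
            if (streamClock < m_nextDecodeTime || m_fifo.empty())
                return false;                                // data not guaranteed complete yet
            // ...call the pull-style decoder here; it consumes bytes from m_fifo...
            m_nextDecodeTime += m_unitDuration;              // derived from stream timestamps
            return true;
        }
    private:
        std::deque<uint8_t> m_fifo;
        uint64_t m_nextDecodeTime = 0;
        uint64_t m_unitDuration = 0;                         // one audio frame or one picture
    };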

The approach depends on the timing of the MPEG data entering the decoder, which means that it does not handle buffer underflow conditions unless you add some kind of automatic decoding that continues even if no more MPEG data appears, and this is basically what I've done. In the end it was just a relatively straightforward extension of the automatic decoding that was already there to handle the fact that MPEG audio streams do not have to explicitly timestamp every single audio frame (the CD-i Green Book does not even allow this unless you waste massive amounts of space in each MPEG audio data sector); that automatic decoding would have been needed anyway to correctly decode the last pictures of a sequence, but it had never been tested before.

For performance and possible patent reasons I have taken care to edit the MPEG decoding code (placing appropriate #ifdef lines at the right places) so that only MPEG 1 video and audio layer I/II decoding code is compiled into the CD-i Emulator executable. This is all that is needed for CD-i anyway and MPEG 2 video and audio layer III greatly complicate the decoding and thus significantly enlarge the compiled code.

Being somewhat stymied at the FMV front, I next decided to spend some time on another lingering issue. During testing, I often have to do the same exact sequence of mouse actions to get a CD-i application to a problem point and this is starting to be annoying. Input recording and playback are a partial solution to this but then you still have to wait while the application goes through it, which is also annoying and can sometimes take quite some time anyway. The obvious solution is a full emulation state save/restore feature, which I've given some thought and started implementing. It's nowhere near finished, though.

During the MESS collaboration I spent some time investigating the MESS save/restore mechanism. If at all possible I would love to be compatible for CD-i emulation states, but it turns out to be quite hard to do. The basic internal mechanism is quite similar in spirit to what I developed for CD-i Emulator, but it's the way the data is actually saved that makes compatibility very hard. Both approaches basically boil down to saving and restoring all the relevant emulation state variables, which includes easy things like the contents of CPU, memory and device registers but also internal device state variables. The latter are of course not identical between different emulators, but they could probably be converted if some effort were thrown at it, and for a typical device they aren't very complex anyway. The MESS implementation uses an initialization-time registration of all state variables; at save/restore time it just walks the registrations and saves or restores the binary contents of those variables. CD-i Emulator has a somewhat more flexible approach: at save/restore time it calls a device-specific serialize function to save or restore the contents of the state variables. The actual registration/serialization code is structurally similar in the two emulators (a simple list of macro/function calls on the state variables) but it runs at different times.

The real problem is that MESS includes very little meta information in the save files: only a single checksum of all the names and types of registered state variables, in registration order. This is enough to validate the save data at restore time, but only if the state variables of the saving emulator exactly match those of the restoring emulator, because there is no information with which to implement skipping or conversions. That requirement already bites between different versions, or in some cases even configurations, of MESS emulators, and even more so between MESS and CD-i Emulator! The meta information could of course be obtained from the MESS source code (relatively simple macro modifications could cause it to be written out), but that would require exact tracking of MESS versions, because every version could have its own checksum corresponding to different meta information (in that case CD-i Emulator would need a meta information set for every MESS checksum value it wants to support).

I want CD-i Emulator to be more flexible, especially during development, so I decided to make full meta information an option in the save file. The saved state of every device is always versioned, which allows the save/restore code to implement explicit conversion where needed, but during development this isn't good enough. With full meta information turned on, the name and type of every state variable precedes the save data for that variable in the save file. This allows more-or-less automatic skipping of unknown state variables, and when properly implemented the restore code can also handle variable reordering. At release time, I will fix the version numbers and save full metadata sets for those version numbers, so that the same automatic skipping and handling of reordering can be done even if the metadata isn't in the save file (it probably won't be, because of file size considerations, although that may turn out to be a non-issue: save files need to include the full RAM contents anyway, which is 1 MB of data in the simplest case without any compression, and compression is of course an option).
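
A minimal sketch of what such a self-describing serializer could look like (all names here are invented for illustration; they are not the actual CD-i Emulator interfaces):

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical save-file writer: with metadata enabled, each variable is
// preceded by its name and a type tag so a restorer can skip or reorder it.
class CStateSaver {
public:
    explicit CStateSaver(bool withMetadata) : m_withMetadata(withMetadata) {}

    template <typename T>
    void Save(const char *name, const T &value, char typeTag) {
        if (m_withMetadata) {
            WriteString(name);
            m_out.push_back(typeTag);          // e.g. 'b', 'w', 'l' for byte/word/long
        }
        const uint8_t *p = reinterpret_cast<const uint8_t *>(&value);
        m_out.insert(m_out.end(), p, p + sizeof(T));
    }

    const std::vector<uint8_t> &Data() const { return m_out; }

private:
    void WriteString(const char *s) {
        m_out.insert(m_out.end(), s, s + std::strlen(s) + 1);  // NUL-terminated name
    }
    bool m_withMetadata;
    std::vector<uint8_t> m_out;
};

// A device-specific serialize function would then simply be a list of calls:
//   saver.Save("ch1_display_ctrl", m_dcr1, 'l');
//   saver.Save("cursor_x",         m_curx, 'w');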

In addition to all of the above, I made some progress on the ROM-less emulation front. First I spent some time reading up on the internals of OS-9 file managers, because writing a replacement for the NRF file manager (NRF = Nonvolatile RAM File manager) seemed the logical next step. Actually writing it turned out not to be that hard, but there were of course bugs in the basic ROM emulation code. Most of them had to do with handlers not calling into the original ROM, which totally screwed up the tracing code. Some new functionality was also needed to properly read/write OS-9 data structures inside the emulated machine from the ROM emulation code; I wanted to implement this in such a way that compilation to "native" 68000 code remains a future option for ROM emulation modules. And of course the massive tracing described in the previous blog post had to be curtailed because it was impossible to see the relevant information in the morass of tracing output.

The new emulated NRF stores its files in the PC file system and it currently works fine when you start it with no stored files (i.e., the player will boot). In that case it will write out a proper "csd" (Configuration Status Descriptor) file. However, if this file already exists, the player crashes, although I have so far not found any fault in the NRF code. The origin of the problem probably lies elsewhere; I suspect it has to do with the hidden "player_shell_settings.prf" file. This file is read and written by the ROM bootstrap even before OS-9 is running; it does this by directly accessing the NVRAM memory (the file never changes size and is always the first one in NVRAM). Since the bootstrap accesses of this file do not go through the NRF file manager or even the NVRAM driver they are not redirected by the OS-9 emulation. However, later accesses by the player shell *are* redirected and the player shell does not seem able to handle the file not existing in the PC file system in the case where a csd file already exists. Solutions include extending the emulated NRF to always access this particular file from the NVRAM instead of the PC file system or somehow synchronizing the two locations for the file. The latter is probably the easiest route given the fixed location and size of the file, but the former is also useful as it would provide a full reimplementation of the original NRF that could in principle be compiled to native 68000 code to replace the "original" NRF in ROM (this is where gcc comes in as alluded to earlier, since all emulation code is written in C++).

In either case, I do not want the file manager to directly access emulated NVRAM although it could do so easily, as there is already an internal CNvramPort interface that provides just such access independent of the actual emulated NVRAM chip. The NRF file manager should instead call the NVRAM driver, which means that I need to implement cross-module calling first. It's not really hard in principle; the design has been done, but there are a lot of little details to get right (the most obvious implementation uses at least 66 bytes of emulated stack space on each such call, which I find excessive and which might not even work; smarter implementations require some finicky register mask management or a "magic cookie" stacking approach, the latter having the best performance in the emulation case but being impossible in the native 68000 compilation case). When cross-module calling is working, I can also have the file manager allocate emulated memory and separate out the filename parsing functions by using the OS-9 system calls that provide these functions (the current emulated NRF does not allocate emulated memory, which is arguably an emulation error, and has the filename parsing coded out explicitly).

When everything works correctly with the emulated NRF, I have to find some way of integrating it in the user experience. You could always start over without any NVRAM files, but I'd like to have some way of migrating files between the two possible locations without having to run CD-i Emulator with weird options. Extending the CD-i File Extractor (cdifile) by incorporating (part of) the emulated NRF seems the obvious choice. That would also provide me with some impetus to finally integrate it with the CD-i File Viewer (wcdiview) program, which is supposed to be a GUI version of cdifile but so far is just a very thin skeleton, barely able to graphically display a single CD-i IFF image file passed on the command line (it doesn't even have a File Open menu), and will often crash. A proper implementation would look like Windows Explorer with a tree view on the left (CD-i file system, real-time channels and records, IFF chunk structure, etc) and a variable content display on the right (raw data view, decoded sector view, code disassembly view, graphical image view, audio playback, slideshow playback, decoded MPEG view, MPEG playback, etc).

That touches on another area in which I did some work last month: the saving of CD-i IFF image files for each emulated video frame. The motivation for this was to bring full-resolution real-time frame saving into the realm of the possible, as it would write only about 2 x (1024 + 280 x (384 + 32)) = 247 KB of raw CD-i video and DCP data per frame instead of 560 x 768 x 3 = 1260 KB of raw RGB. At least on my PC this has turned out not to be the case, however. The data is written out fine, which is not as easy as it sounds since video line data size can vary with each line because of pixel repeat and run-length encoding, but it's still too slow. That being so, I am not really very motivated to extend the CD-i IFF decoding implementation to actually decode this information. Some kind of compression could be an option, but that takes processor time and makes things even harder and possibly slower. Perhaps using another thread for this would be a solution; on a multi-core machine this should not greatly impact the basic emulation performance nor the debugging complexity, as the compression code would be independent of the emulation itself.
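
If I do go the separate-thread route, the shape of it would presumably be a simple producer/consumer queue along the following lines (just a sketch with invented names, using standard C++ threads rather than whatever threading the emulator actually uses):

#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Sketch: the emulation thread queues raw frame data; a worker thread
// compresses and/or writes it so the emulation itself is not slowed down.
class CFrameWriter {
public:
    CFrameWriter() : m_quit(false), m_thread(&CFrameWriter::Run, this) {}
    ~CFrameWriter() {
        { std::lock_guard<std::mutex> lock(m_mutex); m_quit = true; }
        m_cond.notify_one();
        m_thread.join();
    }

    // Called from the emulation thread once per video frame.
    void QueueFrame(std::vector<unsigned char> frame) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push_back(std::move(frame)); }
        m_cond.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::vector<unsigned char> frame;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cond.wait(lock, [this] { return m_quit || !m_queue.empty(); });
                if (m_queue.empty()) return;   // quit requested and nothing left to write
                frame = std::move(m_queue.front());
                m_queue.pop_front();
            }
            // ... compress and/or write the CD-i IFF frame data here ...
        }
    }

    bool m_quit;
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::deque<std::vector<unsigned char> > m_queue;
    std::thread m_thread;
};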

So there is still a lot of work to be done, but it's all quite interesting and will provide for some entertaining evenings and weekends in the coming weeks or possibly months.

          ROM-less emulation progress        
Over the last two weeks I have implemented most of the high-level emulation framework that I alluded to in my last post here as well as a large number of tracing wrappers for the original ROM calls. In the next stage I will start replacing some of those wrappers with re-implementations, starting with some easy ones.

It turns out I was somewhat optimistic; so far I have wrapped over 450 distinct ROM entry points (the actual current number of wrappers is 513, but there are some error catchers and possible duplicates). Creating the wrappers and writing and debugging the framework took more effort than I expected, but it was worth it: every call to a ROM entry point described or implied by the Green Book or OS-9 documentation is now wrapped with a high-level emulation function that so far does nothing except call the original ROM routine and trace its input/output register values.

Surely there aren't that many application-callable API functions, I can hear you think? Well actually there are, for sufficiently loose definitions of "application-callable". You see, the Green Book specifies CD-RTOS as being OS-9 and every "trick" normally allowed under OS-9 is theoretically legal in a CD-i title. That includes bypassing the OS-supplied file managers and directly calling device drivers; there are many CD-i titles that do some of this (the driver interfaces are specified by the Green Book). In particular, all titles using the Balboa library do this.

I wanted an emulation framework that could handle this so my framework is built around the idea of replacing the OS-9 module internals but retaining their interfaces, including all the documented (and possibly some undocumented) data structures. One of the nice features of this approach is that native ROM code can be replaced by high-level emulation on a routine-by-routine basis.

How does it really work? As a start, I've enhanced the 68000 emulation to possibly invoke emulation modules whenever an emulated instruction generates one of the following processor exceptions: trap, illegal instruction, line-A, line-F.
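
In rough outline, and purely as an illustration (all names below are invented; the real hook of course lives inside the 68000 core), the line-A case could be handled like this:

#include <cstdint>
#include <map>

// Illustrative sketch: how the 68000 core could hand a hooked line-A
// instruction over to a registered high-level emulation function.
typedef int (*EmulationFunc)(int entrynumber);

struct CCpuState { uint32_t pc; /* ... data and address registers ... */ };

static std::map<uint32_t, EmulationFunc> g_lineATable;   // keyed by instruction address

// Called by the instruction loop when a line-A exception would be generated.
// Returns false if the address is not hooked, so the normal exception occurs.
bool OnLineAException(CCpuState &cpu, uint16_t (*readWord)(uint32_t address))
{
    std::map<uint32_t, EmulationFunc>::const_iterator it = g_lineATable.find(cpu.pc);
    if (it == g_lineATable.end())
        return false;

    int entry = readWord(cpu.pc + 2);   // the word following the instruction selects the entry point
    it->second(entry);                  // run (or resume) the emulation function
    cpu.pc += 4;                        // skip the instruction and its entry-point word
    return true;
}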

The emulation modules can operate in two modes: either copy an existing ROM module and wrap its entry points, or generate an entirely new memory module. In both cases, the emulation module will emit line-A instructions at the appropriate points. The emitted modules will go into a memory area appropriately called "emurom" that the OS-9 kernel scans for modules. Giving the emitted modules identical names but higher revision numbers than the ROM modules will cause the OS-9 kernel to use the emitted modules.

This approach works for every module except the kernel itself, because it is entered by the boot code before the memory scan for modules is even performed. The kernel emulation module will actually patch the ROM kernel entry point so that it jumps to the emitted kernel module.

The emitted line-A instructions are recognized by the emulator disassembler; they are called "modcall" instructions (module call). Each such instruction corresponds to a single emulation function; entry points into the function (described below) are indicated by the word immediately following it in memory. For example, the ROM routine that handles the F$CRC system call now disassembles like this:

modcall kernel:CRC:0
jsr XXX.l
modcall kernel:CRC:$
rts

Here the XXX is the absolute address of the original ROM routine for this system call; the two modcall instructions trace the input and output registers of this handler. If the system call were purely emulated (no fallback to the original ROM routine) it would look like this:

modcall kernel:CRC:0
modcall kernel:CRC:$
rts

Both modcall instructions remain, although technically the latter is now unnecessary, but the jsr instruction has disappeared. Technically, the rts instruction could also be eliminated but it looks more comprehensible this way.

One could view the approach as adding a very powerful "OS-9 coprocessor" to the system.

If an emulation function has to make inter-module calls, complications arise. High-level emulation context cannot cross module boundaries, because the called module may be native (and in many cases even intra-module calls can raise this issue). For this reason, emulation functions need additional entry points where the emulation can resume after making such a call. The machine language would look like this, e.g. for the F$Open system call:

modcall kernel:Open:0
modcall kernel:Open:25
modcall kernel:Open:83
modcall kernel:Open:145
modcall kernel:Open:$
rts

The numbers following the colon are relative line numbers in the emulation function. When the emulation function needs to make a native call, it pushes the address of one such modcall instruction on the native stack, sets the PC register to the address it wants to call and resumes instruction emulation. When the native routine returns, it will return to the modcall instruction which will re-enter the emulation function at the appropriate point.

One would expect that emulation functions making native calls need to be coded very strangely: a big switch statement on the entry code (relative line number), followed by the appropriate code. However, a little feature of the C and C++ languages allows the switch statement to be mostly hidden. The languages allow the case labels of a switch statement to be nested arbitrarily deep into the statements inside the switch.

The entire contents of emulation functions are encapsulated inside a switch statement on the entry number (hidden by macros):

switch (entrynumber)
{
case 0:
...
}

On the initial call, zero is passed for entrynumber so the function body starts executing normally. Where a native call needs to be made, the processor registers are set up (more on this below) and a macro is invoked:

MOD_CALL(address);

This macro expands to something like this:

MOD_PARAMS.SetJumpAddress(address);
MOD_PARAMS.SetReturnLine(__LINE__);
return eMOD_CALL;
case __LINE__:

Because this is a macro expansion, both invocations of the __LINE__ macro will expand to the line number of the MOD_CALL macro invocation.

What this does is to save the target address and return line inside MOD_PARAMS and then return from the emulation function with value eMOD_CALL. This value causes the wrapper code to push the address of the appropriate modcall instruction and jump to the specified address. When that modcall instruction executes after the native call returns, it passes the return line to the emulation function as the entry number which will dutifully switch on it and control will resume directly after the MOD_CALL macro.

In reality, the code uses not __LINE__ but __LINE__ - MOD_BASELINE which will use relative line numbers instead of absolute ones; MOD_BASELINE is a constant defined as the value of __LINE__ at the start of the emulation function.

The procedure described above has one serious drawback: emulation functions cannot have "active" local variables at the point where native calls are made (the compiler will generate errors complaining that variable initialisations are being skipped). However, the emulated processor registers are available as temporaries (properly saved and restored on entry and exit of the emulation function if necessary) which should be good enough. Macros are defined to make accessing these registers easy.

When native calls need to be made, the registers must be set up properly. This would lead to constant "register juggling" before and after each call, which is error-prone and tedious. To avoid it, it is possible to use two new sets of registers: the parameter set and the results set. Before a call, the parameter registers must be set up properly; the call will then use these register values as inputs and the outputs will be stored in the results registers (register juggling will be done by the wrapper code). The parameter registers are initially set to the values of the emulated processor registers and also set from the results registers after each call.
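
Putting the pieces together, a complete but purely illustrative emulation function might look like the sketch below. The MOD_* macro bodies are simplified stand-ins for the real ones described above, and every other name is invented:

#include <iostream>

enum EModResult { eMOD_DONE, eMOD_CALL };           // hypothetical return codes

struct CModParams {                                 // simplified parameter block
    unsigned long jumpAddress;
    int returnLine;
    void SetJumpAddress(unsigned long a) { jumpAddress = a; }
    void SetReturnLine(int line)         { returnLine  = line; }
};
static CModParams MOD_PARAMS;

// Simplified stand-ins for the real macros described above.
#define MOD_BEGIN          enum { MOD_BASELINE = __LINE__ };                  \
                           switch (entrynumber) { case 0:
#define MOD_CALL(address)  MOD_PARAMS.SetJumpAddress(address);                \
                           MOD_PARAMS.SetReturnLine(__LINE__ - MOD_BASELINE); \
                           return eMOD_CALL;                                  \
                           case __LINE__ - MOD_BASELINE:
#define MOD_END            break; } return eMOD_DONE;

// Sketch of an emulation function that needs one native ROM call in the middle.
EModResult emu_Open(int entrynumber)
{
    MOD_BEGIN
        // ... set up the parameter registers for the native helper here ...
        MOD_CALL(0x00F86D10ul);   // resume instruction emulation at this native address
        // ... execution continues here when the modcall re-enters the function ...
        std::cout << "native call returned, resumed at relative line " << MOD_PARAMS.returnLine << "\n";
    MOD_END
}

The wrapper that invoked emu_Open would, on seeing eMOD_CALL, push the address of the corresponding modcall instruction on the emulated stack and set the PC register to the saved jump address, exactly as described above.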

The following OS-9 modules are currently wrapped:

kernel nrf nvdrv cdfm cddrv ucm vddrv ptdrv kbdrv pipe scf scdrv

The *drv modules are device drivers; their names must be set to match the ones used in the current system ROM in order to properly override those. The *.brd files in the sys directory have been extended to include this information like this:

** Driver names for ROM emulation.
set cddrv.name=cdapdriv
set vddrv.name=video
set ptdrv.name=pointer
set kbdrv.name=kb1driv

The kernel emulation module avoids knowledge of system call handler addresses inside the kernel by trapping the first "system call" so that it can hook all the handler addresses in the system and user mode dispatch tables to their proper emulation stubs. This first system call is normally the I$Open call for the console device.

File manager and driver emulation routines hook all the entry points by simply emitting a new entry point table and putting the offset to it in the module header. The offsets in the new table point to the entry point stubs (the addresses of the original ROM routines are obtained from the original entry point table).

The above works fine for most modules, but there was a problem with the video driver because it is larger than 64KB (the offsets in the entry point table are 16-bit values relative to the start of the module). Luckily there is a text area near the beginning of the original module (it is actually just after the original entry point table) that can be used for a "jump table" so all entry point offsets fit into 16 bits. After this it should have worked, but it didn't, because it turns out that UCM has a bug that requires the entry point table to *also* be in the first 64KB of the module (it ignores the upper 16 bits of the 32-bit offset to this table in the module header). This was fixed by simply reusing the original entry point table in this case.

One further complication arose because UCM requires the initialisation routines of drivers to also store the absolute addresses of their entry points in UCM variables. These addresses were "hooked" by adding code to the initialisation emulation routine that changes these addresses to point to the appropriate modcall instructions.

All file managers and drivers contain further dispatching for the SetStat and GetStat routines, based on the contents of one or two registers. Different values in these registers will invoke entirely separate functions with different register conventions; they really must be redirected to different emulation functions. This is achieved by lifting the dispatching to the emulation wrapper code (it is all table-driven).
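
As an illustration of such table-driven dispatching (a sketch only; the structure, the function names and the subcode value are invented), the wrapper side might look like this:

#include <cstddef>
#include <cstdint>

// Hypothetical dispatch table entry: the GetStat/SetStat function code in one
// register (and optionally a subcode in another) selects a separate emulation function.
typedef int (*EmulationFunc)(int entrynumber);

struct SStatDispatch {
    uint16_t      code;     // e.g. SS_PT ($59 in the trace above)
    uint16_t      subcode;  // e.g. PT_Coord, or 0xFFFF for "don't care"
    EmulationFunc func;     // emulation function to redirect to
};

static int ucm_GetPointer(int entry)     { (void)entry; return 0; }   // stub for the sketch
static int ucm_GetStatDefault(int entry) { (void)entry; return 0; }   // generic fallback

static const SStatDispatch g_ucmGetStat[] = {
    { 0x0059, 0x0000, ucm_GetPointer },      // SS_PT; the PT_Coord subcode value is a placeholder
    { 0xFFFF, 0xFFFF, ucm_GetStatDefault },  // catch-all
};

// Wrapper-side dispatch: pick the emulation function based on register values.
EmulationFunc DispatchGetStat(uint16_t d1w, uint16_t d2w)
{
    const size_t n = sizeof(g_ucmGetStat) / sizeof(g_ucmGetStat[0]);
    for (size_t i = 0; i < n; ++i) {
        const SStatDispatch &e = g_ucmGetStat[i];
        if ((e.code == d1w || e.code == 0xFFFF) &&
            (e.subcode == d2w || e.subcode == 0xFFFF))
            return e.func;
    }
    return ucm_GetStatDefault;   // fall back to the generic handler
}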

Most of the above has been implemented, and CD-i emulator now traces all calls to ROM routines (when emurom is being used). A simple call to get pointing device coordinates would previously trace as follows (when trap tracing was turned on with the "et trp" command):

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

Here the input value d0.w=7 is the path number of the pointing device; the resulting mouse coordinates are in d1.l and correspond to (253,495).

When modcall tracing is turned on, this "simple" call will trace as follows:

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00F86EE0(kernel) MODCALL[16383] kernel:GetStt:0 <= d0.w=7 d1.w=$59 [Sys]
@00F86D10(kernel) MODCALL[16384] kernel:CCtl:0 <= d0.l=2 [NoTrap]
@00F86D1A(kernel) MODCALL[16384] kernel:CCtl:$ =>
@00F8A460(ucm) MODCALL[16385] ucm:GetPointer:0 <= u_d0.w=7 u_d2.w=0
@00FA10A4(pointer) MODCALL[16386] pointer:PtCoord:0 <= d0.w=7
@00FA10AE(pointer) MODCALL[16386] pointer:PtCoord:$ => d0.w=$8000 d1.l=$1EF00FD
@00F8A46A(ucm) MODCALL[16385] ucm:GetPointer:$ =>
@00F86D10(kernel) MODCALL[16387] kernel:CCtl:0 <= d0.l=5 [NoTrap]
@00F86D1A(kernel) MODCALL[16387] kernel:CCtl:$ =>
@00F86EEA(kernel) MODCALL[16383] kernel:GetStt:$ =>
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

You can see that the kernel dispatches this system call to kernel:GetStt, the handler for the I$GetStt system call. It starts by doing some cache control and then calls the GetStat entry point of the ucm module, which dispatches it to its GetPointer routine. This routine in turn calls the GetStat routine of the pointer driver, which dispatches it to its PtCoord routine. This final routine performs the actual work and returns the results, which are then ultimately returned by the system call, after another bit of cache control.

The calls to ucm:GetStat and pointer:GetStat are no longer visible in the above trace as the emulation wrapper code directly dispatches them to ucm:GetPointer and pointer:PtCoord, respectively; it doesn't even trace the dispatching because this would result in another four lines of tracing output.

As a side note, all of the meticulous cache and address space control done by the kernel is really wasted, as CD-i systems do not need it. But the calls are still being made, which makes the kernel needlessly slow and is one major reason why CD-i titles often call device drivers directly. Newer versions of OS-9 eliminate these calls by using different kernel flavors for different processors and hardware configurations.

The massive amount of tracing needs to be curtailed somewhat before further work can productively be done; this is what I will start with next.

I have already generated fully documented stub functions for the OS-9 kernel from the OS-9 technical documentation; I will also need to generate them for all file manager and driver calls, based on the digital Green Book.

It is perhaps noteworthy that some kernel calls are not described in any of the OS-9 version 2.4 documentation that I was able to find, but they *are* described in the online OS-9/68000 version 3.0 documentation.

Some calls made by the native ROMs remain undocumented but those mostly seem to be CD-i system-control (for example, one of them sets the front display text). Of the OS-9 kernel calls, only the following ones are currently undocumented:

F$AllRAM
F$FModul
F$POSK

Their existence was inferred from the appropriate constants in the compiler library files, but I have not seen any calls to them (yet).
          Input record/playback, I2M emulation and a new start        
It has been more than a month since my last blog entry. There are several reasons for this, among them the very busy month of December and a mild CD-i Emulator burn-out. This blog entry is part of an attempt to restart the process and hopefully arrive at a releasable beta version in the not too distant future.

Early December I did a little work on rewriting the input record/playback code to its final specifications, using the full IFF reading/writing code that I reported on in the previous entries. It was hairy work (refactoring often is) and I more or less threw in the towel at that point.

This was followed in mid-December by an attempt to regain focus. Triggered by some forum discussions, I did some work on emulating the I2M Media Playback CD-i board for the PC (in a sense another completely different player generation).

This board uses a Motorola 68341 so-called "CD-i Engine" processor chip, which is a CPU32 processor core with some on-chip peripherals (a DMA controller, two different serial interfaces, a timer, etc), a VDSC video chip and a completely undocumented "host interface" to the PC bus. So far the board does not appear to have a separate CD/Audio interface, but it does have a VMPEG chip.

I had already implemented CPU32 emulation and partial emulation of its on-chip peripherals, but this needed to be extended a bit more. The main "problem" here turned out to be that these peripherals appear to ignore address bit 23.

I also had to reverse engineer the host interface, which amounted to some disassembly and tracing. In the process I also figured out that I had built a bad ROM image (the host software generates these on the fly from "ROM fragment" files and I had mis-interpreted the script file that tells it how to do this). I used the CD-i Playback 2.2.1 files for this.

The I2M board now successfully boots OS-9, including displaying some tracing messages (it turns out that it can do this either via the host interface or via the serial interface, which was the reason for lots of head scratching until the "aha!" moment). The video display gets initialized (some kind of "blue" screen) and then the board "hangs" inside a watchdog process, presumably waiting on some signal from the host telling it what to do (it's not a crash but appears to be waiting for some host interrupt). Figuring this out requires disassembly of the watchdog process and that's where I stopped, having worked on it for about three evenings.

Somewhere in between (can't remember the exact timing) I also did a test build with Visual Studio 2008, which resulted in a few portability fixes but nothing dramatic. I had hoped this would make some speed difference but I didn't notice any. I've returned to my old trusty Visual Studio 6; it's smaller and faster and has everything I need.

That's more or less the current status; I hope to resume work on the input record/playback code soon which is the last thing holding up the current beta release.
          The Day Mars Died         
Trevor Adams woke early on the morning of September 13th, 2125, his alarm ringing loudly in his ears. Well, I suppose singing would probably be a better description. Birds. No birds up here and Trevor loved birds. After an exaggerated morning stretch, Trevor stumbled into the shower and then proceeded to wrap up his morning bathroom rituals. Trevor whistled one of the bird songs from his alarm as he pulled on his overalls and spacesuit. No helmet for the fifth straight day… company says they’re still mandatory but not a single worker is wearing it. Why would you bother if you’re fine without it?

Atmospheric Engineer was his title. He was proud of it. Trevor worked tirelessly in school to prepare himself for a career in the burgeoning field of terraforming and here he was. The Mars terraforming project was a great success. Only 12 years after terraforming technology was first patented, Trevor and his team had already set up a stable breathable atmosphere on Mars. The achievement was, if you would forgive a brief pun, out of this world.

Trevor’s task for this sol was to visit three atmospheric monitoring stations, to ensure the readings continued to be stable, and perform any necessary maintenance on the associated atmospheric processors. Despite the fact that there were almost five thousand people on Mars, Trevor was working alone in an isolated area and had been for several weeks. Munching absentmindedly on a nutrient bar, his breakfast, he strode out of the habitat unit and jumped on to his Marver (some smart ass had decided that Mars Rover = Marver. I know…). It was a short and uneventful trip up out of the crater and up onto the south ridge to visit ATMO-63, first workstation of the day.

Having finished his breakfast, Trevor took a long swig of water from his flask and strode inside the complex automated station. It took hardly a glance at the display panels inside ATMO-63 for Trevor to understand that something was seriously wrong. Trevor immediately began working the problem, thinking it was perhaps localized to ATMO-63.

Unfortunately, the huge success of the Mars Terraforming mission had led to significant lapses in the duties of the responsible scientists and engineers. Last night had been a particularly big party at the main hab. No one else had checked their stations yet on this sol. The fact was that ATMO-63 was not an isolated case. All of the atmospheric processors on Mars had been malfunctioning for hours. In order for the system to establish a stable atmosphere, all of the processors had to work together.

Trevor worked to isolate the cause of the abnormal behaviour of the terraforming machine while also attempting to contact the main habitat. Before he could get far in his work, Trevor noted something truly terrifying. Instruments were reading a massive buildup of pure oxygen inside and around ATMO-63. Trevor immediately turned and started to head for the exit from the station when he heard a deafening WHOOSH followed for just a fraction of a second by the most intense heat he had ever experienced. And that was it.

The explosion at ATMO-63 was massive, visible all the way from the main habitat, and almost instantly every other atmospheric processor on the planet overloaded and exploded. The interdependency that made the whole system work to build Mars’ atmosphere was what ultimately caused the deaths of all 4,876 souls on Mars that day. Mars’ new atmosphere catastrophically collapsed.
Approximately 9 months later, during the investigation into the disaster, Trevor Adams’ body would be found in the Martian sand near the ruin of ATMO-63.   Hailey Jones, a dear friend of Trevor’s, would be among those to find his remains.  She personally dug a grave for her friend and marked it with a small plaque and a tiny audio device.  For many years on calm sols one could hear bird songs drifting over the crater from the south ridge.
          Sony Xperia E4 Dual full phone specifications        
Sony Xperia E4 Dual With 5-Inch Display available at Rs. 12,490.  The Sony Xperia E4 Dual features 1 GB of RAM coupled with a 1.3GHz quad-core processor. With 8 GB of internal storage the phone is...

          TuneUp Utilities 2014 v14.0.1000 + Key        
TuneUp Utilities 2014 v14.0.1000 + Key | 28.97 MB

TuneUp Utilities Pro is a powerful set of tune-up tools which optimize Windows and improve its efficiency, appearance and performance. This software is able to optimize Windows, boost your PC's performance, fix problems and help you customize Windows.
The TuneUp Utilities package contains a complete set of tools for system optimization, configuration, cleanup and maintenance, to increase the efficiency of the system, keep it clean and help solve various problems.

Key features of TuneUp Utilities:
- fixes various common errors in Windows
- configures and optimizes the Mozilla Firefox browser
- customizes Windows icons, boot and welcome screens and window styles
- automatically scans the system and prevents problems that slow it down
- allows secure and permanent removal of your sensitive data from the hard drive
- removes all unwanted and unnecessary files from the hard disk
- powerful integrated hard disk tools
- thorough registry cleaning
- recovery of information that has been deleted by the user
- new features to enhance the speed of your internet connection
- frees RAM to increase the speed and efficiency of operations
- an Uninstall Manager to remove and manage the applications installed in Windows
- complete management of running programs and processes
- helps effectively solve standard Windows problems
- easy customization of the Windows configuration
- a StartUp Manager to manage the programs registered to start with Windows
- contains System Information, for more information about your hardware and software
- checks the hard disk drive for errors
- finds programs that use significant amounts of memory and CPU without justification
- full compatibility with and optimization for applications such as Internet Explorer, Firefox, Windows Media Player and Office
- full compatibility with Windows Vista
- and more...

System Requirements
- Minimum: 800x600 screen resolution with 256 colors; recommended: 1024x768 screen resolution with 16.7 million colors
- Minimum: 60 MB free disk space; recommended: 100 MB free disk space
- CD-ROM or DVD-ROM-Drive
- Internet Explorer 6 or 7
- Internet access

Install:
1. Temporarily disconnect internet.
2. Install program.
3. Use given key to register
4. Done. Enjoy.

Download Link
          CastleStorm 2013 Steam Rip MULTi10 3DM        
CastleStorm 2013 Steam Rip MULTi10 3DM | 264.76 MB

Information:
Year of release : 2013
Genre : Action, Indie, Strategy
Developer : Zen Studios
Publisher : Zen Studios
Platform : PC
Language : English, German, French, Italian, Spanish, Hungarian, Japanese
Sound language : English
Publication Type : License
Medicine : Present (3DM)

Description:
A mixture of real-time strategy and tower defence, in which the player builds their own castle and defends it from attacking enemies.

System Requirements:
√ Operating system: Windows XP / Windows Vista / Windows 7/Windows 8
√ Processor: Dual Core CPU@2.00GHz
√ Memory: 1 GB
√ Hard drive space: 800 MB
√ Sound device: compatible with DirectX 9.0c
√ Video card: GeForce 8600 / Radeon HD 3670 / Intel HD 4000

Download and Play!



          YouTube Downloader Pro YTD 4.4 Final ML        
YouTube Downloader Pro YTD 4.4 Final ML | 11.08 MB

YouTube Downloader is software that allows you to download videos from YouTube, Facebook, Google Video, Yahoo Video, and many others and convert them to other video formats.

The program is easy to use, just specify the URL for the video you want to download and click the Ok button!

It also allows you to convert downloaded videos for iPod, iPhone, PSP, Cell Phone, Windows Media, XviD and MP3.

You can use YouTube Downloader to download the videos of your choice from home, at the office or in school.

General Points:
• Download videos from YouTube, FaceBook, Google Video, MySpaceTV and many others
• Allows you to access YouTube videos for which you need to be 18+ years of age
• Converts video for iPod, iPhone, PSP, Cell Phone, Windows Media, XviD and MP3
• Provides the ability to cut and select the output quality of converted videos
• Uses the FFmpeg engine to convert the videos
• Plays videos downloaded in Flash
• Extremely easy to use

System Requirement:
Intel Pentium 233 Mhz (or equivalent processor, such as AMD) or better
Windows XP/Vista/7/8
Internet Explorer 6.0 or higher
64 MB of RAM
Adobe Flash Player 9+

What's New in This Release:
>You can now download videos from more streaming sites.
>If your internet connection is limited, you can now save videos with lower quality.
>For large files, conversion data will be more accurate.
>Improved the download speed for smaller videos.

No Password

Download Link
          Internet Download Manager IDM 6.17 Build 2 Final        
Internet Download Manager IDM 6.17 Build 2 Final | 5.09 MB

Internet Download Manager (IDM) is a tool to increase download speeds by up to 5 times, resume and schedule downloads. Comprehensive error recovery and resume capability will restart broken or interrupted downloads due to lost connections, network problems, computer shutdowns, or unexpected power outages. A simple graphic user interface makes IDM user friendly and easy to use. Internet Download Manager has a smart download logic accelerator that features intelligent dynamic file segmentation and safe multipart downloading technology to accelerate your downloads. Unlike other download managers and accelerators, Internet Download Manager segments downloaded files dynamically during the download process and reuses available connections without additional connect and login stages to achieve the best acceleration performance.

Internet Download Manager supports proxy servers, ftp and http protocols, firewalls, redirects, cookies, authorization, MP3 audio and MPEG video content processing. IDM integrates seamlessly into Microsoft Internet Explorer, Netscape, MSN Explorer, AOL, Opera, Mozilla, Mozilla Firefox, Mozilla Firebird, Avant Browser, MyIE2, and all other popular browsers to automatically handle your downloads. You can also drag and drop files, or use Internet Download Manager from command line. Internet Download Manager can dial your modem at the set time, download the files you want, then hang up or even shut down your computer when it's done.

Other features include multilingual support, zip preview, download categories, scheduler pro, sounds on different events, HTTPS support, queue processor, html help and tutorial, enhanced virus protection on download completion, progressive downloading with quotas (useful for connections that use some kind of fair access policy or FAP like Direcway, Direct PC, Hughes, etc.), built-in download accelerator, and many others.

Version 6.17 adds Windows 8.1 compatibility, adds IDM download panel for web-players that can be used to download flash videos from sites like YouTube, MySpaceTV, and Google Videos. It also features complete Windows 7 and Vista support, YouTube grabber, redeveloped scheduler, and MMS protocol support. The new version also adds improved integration for IE 10 and IE based browsers, redesigned and enhanced download engine, the unique advanced integration into all latest browsers, improved toolbar, and a wealth of other improvements and new features.

What's new in version 6.17 Build 2
(Released: July 12, 2013)
>Improved IE 11 integration (Windows 8.1)
>Added support for new youtube changes
>Fixed bugs

Download Link
          YI 4K+ Action Camera Unboxing Review @YItechnology        
YiTechnology.com Lens 7 layers of glass lens, 155° FOV, F2.8, f=2.66±5%mm LCD Screen 2.2” touch screen, 640*360 screen resolution at 330PPI, 250 cd/m2 brightness, 16:9. Main Processor Ambarella H2 chipset...
          Recipe of the Day: Honeydew Lime Cooler        
Honeydew Lime Cooler Ingredients 4-1/2 cups cubed honeydew (about 1 small melon) 1-1/2 cups lime sherbet 2 tablespoons lime juice 5 fresh strawberries Directions Place melon cubes in a 15-in. x 10-in. x 1-in. baking pan; cover and freeze until firm, about 15 minutes. Set aside five melon cubes. In a food processor, combine the […]
          ESM's QuickLessons A DearMYRTLE Genealogy Study Group Lesson 20        

Hilary Gadsby


QuickLesson 20: Research Reports for Research Success
Elizabeth Shown Mills, “QuickLesson 20: Research Reports for Research Success," Evidence Explained: Historical Analysis, Citation & Source Usage (https://www.evidenceexplained.com/content/quicklesson-20-research-reports-research-success  :  accessed 17 Sept 2016).

This week we will be discussing the research process.

How do we do our research?

How should we do our research?

Can we improve how we research?

With the growth of the internet, how many of us find ourselves joining in with the quick-click genealogy we frequently criticise?

Why do we criticise this way of doing things?

  1. Insufficient preparation
  2. Poorly recorded
  3. Insufficient analysis
So what should we be doing?
Ask yourself these questions.
  1. What do I want to find?
  2. Where should I be doing my research?
  3. How am I going to do the research?
  4. How am I going to record what I find?
  5. How am I going to review what I find?
We can use pen and paper or our computers to assist in these tasks.

We know how to interrogate the databases online and how to enter our results in our software program. There is plenty of information to tell us how to do this, whether digitally or on paper.

Do any programs tell us what we need to look for?

Do any programs tell us whether what we find is relevant?

Poor preparation and lack of analysis can lead to hours of wasted research.

How can we know what we need to find if we have not analysed what we already know?

Creating a research plan will be the best thing you do. It will keep you on track. 

If we wish to move on from being just "information gatherers and processors" as ESM states in this lesson we must consider how we approach our work.

This weekend I came across an individual who had been recorded by another researcher in the Wiki Tree website with the maiden name of ROSLING. However this was not the surname for the parents. The link to the 1911 census revealed that she was recorded as their adopted daughter. There was also a link to an army record showing her date of birth in keeping with the census record.
I am researching the surname ROSLING and was interested in knowing where she fitted in the lineage I am constructing.
If I just entered her name in a search, would I find anything, and how would I know if what I found was relevant?

Experienced researchers will often know exactly where to research and which records may help them find what is available. This does not preclude them from the planning stages but it may reduce the time needed to formulate the plan. Even the experts find themselves stumped occasionally and have to consider alternative strategies. Researching in a new area, be it geographical or an unfamiliar set of records, may require a different skill set and a whole new learning experience. If we are to do thorough research we have to be aware of the resources available.

Even the best plans may need to be altered in the light of new information. Being prepared and analysing what has been found may alter our focus or the manner in which we carry out our research.
The ability to plan and analyse helps us make better use of the research time.

Complex questions may only be answered if we look at all the information we have and understand what it's telling us. 
Some researchers have found that a program such as Evidentia can help them formulate a plan for these complex problems. By entering each piece of information deciding what it is saying and importantly how reliable that information may be we have a clearer understanding of what we already know. 
The source of any information may be flawed. Awareness of reliability and being able to resolve conflicting information are analysis skills that may only come through experience and education.
Learning from others and sharing personal experience helps each of us become better researchers by improving the knowledge base.

Do we read any accompanying information about a record group that we find online before we enter a name in the search box? If not, why not? Surely we need to know if the record is likely to provide us with the information we need before we search. Would you travel miles to an archive or cemetery without checking that they have what you are looking for first? The same should be true for online records. Finding information and blindly entering it into a database is as boring and pointless as writing lines was as a school punishment. If you want the reward of finding that elusive connection you need to spend time preparing and analysing: formulate a plan, familiarise yourself with what may be available, pinpoint the best way to approach the task and adapt the plan as and when more information is discovered. And do not forget that negative results do not mean negative evidence; it may be that the record has just not survived.

As we near the end of this study group, we need to pull together all that we have discussed.

I am writing about my research mentioned above on my One Name Study blog. I have not included specific examples this week as I believe that this lesson is more about understanding the process and the importance of doing this well. 
Only we as individuals know whether we have been disciplined in the past.
Hopefully our discussions may have helped at least one of those watching to become researchers rather than gatherer/processors.

When I started researching, few records or indexes were available online and internet access was expensive.
I was not aware of research plans so I would go armed with notes that I had made to guide my research. 
Whilst looking for ancestors in the BMD indexes on microfiche I would have a name, range of years, and geographical area. When I found a possible candidate I would record and order a certificate. 
The only way I could access the census was using indexes and then when I could get to the local archive I would have to scroll through the microfilm to find what I wanted. 
The internet has made finding many records easier, but has it also created a group of individuals who may believe the adverts that show families building trees using only one online website?
No website will ever contain all the records and whilst the records support our research they are not the researcher. 
Who pieces together which record is relevant to each individual, who is related to whom, and how all these individuals are related? It is we as researchers who analyse the information and decide its relevance.

The reporting suggested by Elizabeth Shown Mills may sound quite prescriptive and academic and unless you have an academic background you may switch off at the thought of report writing. However what she is saying is this. 

  1. Compile your findings complete with the information needed to find them again. 
  2. Collect them together in a manner that you are comfortable working with or that fits with your findings.
  3. Summarize what you have found.
  4. Decide whether you have answered your research question.
  5. Decide whether you need to do more research and create a new research plan.
  6. Make a conclusion and write a reasoned report to support this.
Personally I would say that Evidentia will help you do all of these in a guided way.

Finally here is a link to a Google Sheet I created called The Family History Research Process. It contains links to documents that others may find useful. Please add your comments if you think I may have missed something useful that could be added.

          February 2012 Daring Cooks' Challenge: Flipping Fried Patties!!!        
Hi, it is Lisa and Audax and we are hosting this month's Daring Cooks' challenge. We have chosen a basic kitchen recipe and a basic cooking technique which can be adapted to suit any ingredient that you have to hand and are beloved by children and adults alike … of course we are talking about patties.
Technically, patties are flattened discs of ingredients held together by (added) binders (usually eggs, flour or breadcrumbs), usually coated in breadcrumbs (or flour) and then fried (and sometimes baked). Burgers, rissoles, croquettes, fritters, and rösti are types of patties as well.

Irish chef Patrick "Patty" Seedhouse is said to have come up with the original concept and term as we know it today with his first production of burgers utilizing steamed meat pattys - the pattys were "packed and patted down" (and called pattys for short) in order to shape a flattened disc that would enflame with juices once steamed.

The binding of the ingredients in patties follows a couple of simple recipes (there is some overlap in the categories below):
Patties – patties are ingredients bound together and shaped as a disc.
Rissoles and croquettes – use egg with breadcrumbs as the binder; typical usage for 500 grams (1 lb) of filling ingredients is 1 egg with ½ cup of breadcrumbs (sometimes flour, cooked grains, nuts and bran can be used instead of the breadcrumbs). Some meat patties use no added binders; they rely on the protein strands within the meat to bind the patty together. Vegetarian and vegan patties may use mashed vegetables, mashed beans, grains, nuts and seeds to bind the patty. Generally croquettes are crumbed (breaded) patties which are shallow- or deep-fried. Rissoles are not usually crumbed (but can be) and are pan- or shallow-fried. Most rissoles and croquettes can be baked. (Examples are all-meat patties, hamburgers, meat rissoles, meatloaves, meatballs, tuna fish and rice patties, salmon and potato rissoles, most vegetable patties.)
Wet Fritters – use flour, eggs and milk as the binder, typical usage for 500 grams (1 lb) of filling ingredients is 2 cups flour, 1 egg with 1 cup of milk and are usually deep-fried and sometimes pan-fried  (examples deep fried apple fritters, potato fritters, some vegetable fritters, hushpuppies)
Dry Fritters – use eggs and (some) flour as the binder, typical usage for 500 grams  (1 lb) of filling ingredients is 1 to 2 eggs and (usually) some 2 to 8 tablespoons of flour (but sometimes no flour) and are pan- or shallow- fried. (examples most vegetable patties like zucchini fritters, Thai fish cakes, crab cakes, NZ whitebait fritters)
Röstis – use eggs (sometimes with a little flour) as the binder for the grated potato, carrot and other root vegetables, typical usage for 500 grams (1 lb) of filling ingredients is one egg yolk (potato rösti).

Sautéing, stir frying, pan frying, shallow frying, and deep frying use different amounts of fat to cook the food. Sautéing uses the least amount of oil (a few teaspoons) while deep frying uses the most (many, many cups). The oil helps lubricate (and sometimes adds flavour to) the food being fried so it will not stick to the pan, and helps transfer heat to the food being cooked.

In particular, as a form of cooking patties, pan- and shallow-frying rely on oil of the correct temperature to seal the surface (so retaining moisture) and to heat the interior ingredients (so binding them together), thus cooking the patty. Unlike in deep frying, the exposed topside of the patty allows some moisture loss while cooking, and the contact between the pan bottom and the patty creates greater browning on the contact surface; that is, with pan- and shallow-frying the crust of the patty is browned while the interior cooks through. Because the food is only being cooked on one side while being pan- or shallow-fried, it must be flipped at least once to totally cook the patty.

So this month's challenge is to pan- or shallow-fry a patty, so giving us the title for this challenge “flipping fried patties”.

This challenge will help you understand how to form, what binders to use, and how to fry a patty so that it is cooked to picture perfect perfection.

Recipe Source:  Audax adapted a number of popular recipes to come up with the challenge patty recipes and Lisa has chosen to share two recipes – California Turkey Burger adapted from Cooking Light Magazine, and French Onion Salisbury Steak adapted from Cuisine at Home magazine.

Blog-checking lines:  The Daring Cooks’ February 2012 challenge was hosted by Audax & Lis and they chose to present Patties for their ease of construction, ingredients and deliciousness!  We were given several recipes, and learned the different types of binders and cooking methods to produce our own tasty patties!

Posting Date:  February 14th, 2012

Download the printable .pdf file HERE



Notes:
     
  • Binders
  •  
  • Eggs – are found in most patty recipes, where the egg acts as a binder; use cold eggs and lightly beat them before using. If you cannot use eggs try this tip: "1/4 cup of silken tofu, blended, or a commercial egg re-placer powder mixed with warm water."
  •  
  • Flour – normal plain (all-purpose) flour is used in most fritter recipes; it can be replaced with rice, corn or potato flours (in smaller quantities) in some recipes. If you want some rise in your patties then use self-raising flour or add some baking powder to the flour.
  •  
  • Breadcrumb Preparation – breadcrumbs are a common ingredient in patties, burgers and fritters; they act as a binding agent, ensuring the patty keeps its shape during the cooking process.
  •  
    • Fresh breadcrumbs – these crumbs are made at home with stale bread: simply remove the crusts from one- or two-day-old bread, break the bread into pieces, place the pieces in a blender or food processor, then blend or process until fine. Store any excess in a plastic bag in the freezer. 1 cup of fresh crumbs = 3 slices of bread.
    •  
    • Packaged breadcrumbs – often called dry breadcrumbs, these are used to make a crisp coating on burgers, patties and fritters; they are easily found in the supermarket, but you can also make them at home. Place slices of one- or two-day-old bread on baking trays and bake in the oven on the lowest setting until the slices are crisp and pale brown. Cool the bread, break the pieces in a blender or food processor, then blend or process until fine. 1 cup fine dry breadcrumbs = 4 slices of bread.
     
  • Alternate binders – bran (oat, wheat, rice, barley etc) can be used instead of breadcrumbs in most recipes. Tofu (silken) can replace the egg. Also using mashed potato (or sweet potato, carrots, most root vegetables) and/or mashed beans can help bind most patties. Of course chickpea flour and most other flours can be used to help bind patties. Seeds, nuts and grains can help bind a patty especially when the patty has cooled after cooking. These binders are used in vegan recipes.
  •  
  • Moisteners – Mayonnaise and other sauces, pesto and mustard are used in some meat patty recipes mainly for moisture and flavour but they can act as binders as well. For vegetable patties you can use chopped frozen spinach, shredded carrots, shredded zucchini, shredded apple and cooked grains to add extra moisture. Also sour cream and other milk products are used to increase the tenderness of patties.

     
  • Patty Perfection
  •  
  • When making meat patties, the higher the fat content of the meat, the more the patties shrink during cooking; this is especially true for ground (minced) red meat. Make patties larger than the bun they are to be served on to allow for shrinkage.
  •  
  • For hamburgers keep the fat content to about 20 - 30% (don't use lean meat) this ensures juicy patties when cooked. Also use coarse freshly ground meat (if possible) to make patties, if the mixture is ground too fine the large patties will break apart since the protein strands are too short and are covered in fat and can only bind to nearby ingredients so when the large patty is cooked it will fall apart or be too dense. Compare this behaviour with small amounts of finely ground lean meat (almost a paste) where the protein can adhere to itself (since the protein chains are short, not covered in fat and all the ingredients are nearby) hence forming a small stable patty (lamb kofta, Asian chicken balls, prawn balls).
  •  
  • Patty mixtures should be kept as cold as possible when preparing them and kept cold until you cook them; the cold helps bind the ingredients together.
  •  
  • Don't over-mix the ingredients, or the resultant mixture will be heavy and dense.
  •  
  • For meat patties, chop, mince or grate the vegetable ingredients fairly finely; if they are too coarse the patties will break apart.
  •  
  • Patties made mostly of meat (good quality hamburgers and rissoles) should be seasoned just before the cooking process; if salted too early, liquid can be drawn out of the patty.
  •  
  • Make all the patties the same size so they will cook at the same rate. To get even-sized patties, use measuring cups or spoons to measure out your mixture.
  •  
  • For patties, use your hands to combine the ingredients with the binders; mix gently until the mixture comes cleanly from the sides of the mixing bowl. Test that the final mixture forms a good patty (take a small amount in your palm and form it into a ball; it should hold together) before making the whole batch. Add extra liquid or dry binder as needed. Cook the test patty to check for seasoning, add extra if needed, then cook the rest of the batch. 
  •  
  • Usually patties should be rested (about an hour) before cooking; they “firm” up during this time, a good technique to use if your patty is soft. Always wrap patties; they can dry out if left in the fridge uncovered.
  •  
  • Dampen your hands when shaping patties so the mixture won't stick to your fingers.
  •  
  • If making vegetable patties, it is best to squeeze the grated/chopped/minced vegetables to remove any excess liquid; this is most important for these types of patties.
  •  
  • When making fritters, shred your vegetables; the long strands give a strong lattice for the patties. A food processor or a box grater is great to use here.
  •  
  • For veggie patties make sure your ingredients are free of extra water. Drain and dry your beans or other ingredients thoroughly before mashing. You can even pat them gently dry with a kitchen cloth or paper towel.
  •  
  • Vegetable patties lack the fat of meat patties, so oil the grill when BBQing them so the patty will not stick.
  •  
  • Oil all-meat burgers rather than oiling the barbecue or grill pan – this ensures the burgers don’t stick to the grill, allowing them to sear well. If they sear well in the first few minutes of cooking they’ll be golden brown and juicy. To make it easy, brush the burgers with a brush dipped in oil or, easier still, use a spray can of oil.
  •  
  • If you only have very lean ground beef, try this tip from the Chicago Tribune newspaper: “To each 1 lb (½ kg) of ground beef add 2 tablespoons of cold water (with added salt and pepper) and 2 crushed ice cubes, form patties.” It really does work.
  •  
  • A panade, or mixture of bread crumbs and milk, will add moisture and tenderness to meat patties when the burgers are cooked well-done.
  •  
  • For vegetable patties it is best to focus on one main ingredient, then add some interesting flavour notes to that major taste (for example carrot and caraway patties, or beetroot, feta and chickpea fritters); this gives a much bolder flavour profile than a patty of mashed “mixed” vegetables, which can be bland.
  •  
  • Most vegetable and meat/vegetable patties just need a light coating of seasoned breadcrumbs. Lightly pat breadcrumbs onto the surface of the patty; there is enough moisture and binder on the surface of the patty to hold the breadcrumbs on while it is cooking. You can also use wheatgerm, bran flakes, crushed breakfast cereals, nuts and seeds to coat the patty.
  •  
  • Use fine packet breadcrumbs as the coating if you want a fine, smooth crust on your patties; use coarser fresh breadcrumbs as the coating if you want a rougher, crisper crust.
  •  
  • Flip patties once and only once; over-flipping the patty results in uneven cooking of the interior and allows the juices to escape.
  •  
  • Don't press the patties while they are cooking; you'll squeeze out all of the succulent juices.
  •  
  • Rest patties a while before consuming.

     
  • Shaping the patty
  •  
  • Shaping – If you simply press a ball of mixture flat with your clean hands, it will form a disc that cracks and breaks up around the edges. What you want to do is press down in the middle and in from the sides, turning the patty around in your hand until it is even and uniform. It should be a solid, firm disc. Handle the mixture gently, use a light touch and don’t make the patties too compacted. Rather than a dense burger, which is difficult to cook well, aim for a loosely formed patty that holds together but is not too compressed.
  •  
  • Depressing the centre – When patties cook, they shrink (especially red meat burgers). As they shrink, the edges tend to break apart, causing deep cracks to form in the patty. To combat this you want the burger patty to be thinner in the middle than it is around the edges. Slightly depress the centre of the patty to push a little extra mixture towards the edges. This will give you an even patty once it is cooked.  

     
  • Shallow- and pan-frying 
  •  
  • Preheat the pan or BBQ.
  •  
  • Generally when shallow-frying patties, use enough oil that it comes halfway up the sides of the food. This is best for most meat and vegetable patties, and where the ingredients in the patty are uncooked.
  •  
  • Generally when pan-frying, use enough oil to cover the surface of the pan. This is best for most vegetable patties where all the ingredients are precooked (or cook very quickly), and for all-meat rissoles and hamburgers.
  •  
  • Most oils are suitable for shallow- and pan-frying, but butter is not; it tends to burn. Butter can be used in combination with oil. Low-fat spreads cannot be used to shallow-fry as they contain a high proportion of water. The smoke point is the temperature at which an oil starts to break down into bitter fatty acids and produces a bluish smoke. Rice bran oil is a great choice since it is almost tasteless and has a very high smoke point of 490°F/254°C; canola (smoke point 400°F/204°C) is also a great choice. For comparison: butter 250-300°F/121-149°C, extra light olive oil 468°F/242°C, extra virgin olive oil 375°F/191°C, ghee (clarified butter) 485°F/252°C.   
  •  
  • Do not overload the frying pan; overcrowding traps steam near the cooking food, which can lead to the patties being steamed instead of fried. If you place too many patties at once into the preheated pan, the pan temperature drops and the patties will release juices and begin to stew. Leave some space between patties when you place them in the pan.
  •  
  • For most patties, preheat the oil or fat until it seems to shimmer or a faint haze rises from it, but take care not to let it get so hot it smokes. If the oil is too cool when the patties are added, it will be absorbed by the food, making the patty soggy. If the oil is too hot, the crumb coating will burn before the interior ingredients are cooked and/or warmed through. For vegetable and meat/vegetable patties, start off cooking in a medium-hot skillet and then reduce the heat to medium. For all-meat patties, start off cooking in a very hot skillet and then reduce the heat to hot; as celebrity chef Bobby Flay says, “the perfect [meat] burger should be a contrast in textures, which means a tender, juicy interior and a crusty, slightly charred exterior. This is achieved by cooking the meat [patty] directly over very hot heat, rather than the indirect method preferred for slow barbecues”. All patties should sizzle when they are placed onto the preheated pan.
  •  
  • Cast iron pans are best to fry patties.
  •  
  • When the raw patty hits the hot cooking surface it will stick, and it will stay stuck until a crust forms, which creates a non-stick surface on the patty; at that point you can lift the patty easily. So wait until the patties release themselves naturally from the frying pan surface (check with a gentle shaking of the pan or a light finger-twist of the patty) – maybe a minute or two for meat patties, maybe 3-6 minutes for a vegetable patty. If you try to flip it too early, the burger will fall apart. The secret is to wait for the patty to naturally release itself from the pan surface, then flip it over once.
  •  
  • Veggie burgers will firm up significantly as they cool.
  •  
  • Most vegetable patties can be baked in the oven.
  •  
  • Check the temperature of the oil by placing a few breadcrumbs into the pan; they should take 30 seconds to brown.
  •  
  • If you need to soak up excess oil, place the patties on a rack to drain. Do not place them onto paper towels, since trapped steam can make the patty soggy; if needed, just press off the excess oil with paper towels then place the patties onto a rack.



Mandatory Items: Make a batch of pan- or shallow-fried (or baked) patties.

Variations allowed:  Any variation on a patty is allowed. You can use the recipes provided or make your own recipe.

Preparation time:
Patties: Preparation time less than 60 minutes. Cooking time less than 20 minutes.

Equipment required:
Large mixing bowl
Large stirring spoon
Measuring cup
Frying pan

Basic Canned Fish and Rice Patties


Servings: makes about ten ½ cup  patties
Recipe can be doubled
adapted from http://www.taste.com.au/recipes/17181/tuna+rissoles

This is one of my favourite patty recipes; I make it once a week during the holidays. It is most important that you really mix and mash the patty ingredients well, since the slightly mashed rice helps bind the patty together. 

Ingredients:
1 can (415 gm/15 oz) pink salmon or tuna or sardines, (not packed in oil) drained well
1 can (340 gm/13 oz) corn kernels, drained well
1 bunch spinach, cooked, chopped & squeezed dry or 60 gm/2 oz thawed frozen spinach squeezed dry
2 cups (300 gm/7 oz) cooked white rice (made from 2/3 cups of uncooked rice)
1 large egg, lightly beaten
about 3 tablespoons (20 gm/2/3 oz) fine packet breadcrumbs for binding
3 tablespoons (45 ml) oil, for frying
2 spring (green) onions, finely chopped
1 tablespoon (15 ml) tomato paste or 1 tablespoon (15 ml) hot chilli sauce
1 tablespoon (15 ml) oyster sauce
2 tablespoons (30 ml) sweet chilli sauce
Salt and pepper to taste
½ cup (60 gm/2 oz) seasoned fine packet bread crumbs to cover patties

Directions:
1) Place all of the ingredients into a large bowl.
2) Using your hands or a strong spoon, mix and mash the ingredients with much force (while slowly adding tablespoons of breadcrumbs to the patty mixture) until the mixture starts to cling to itself, about 4 minutes; the longer you mix and mash, the more compacted the final patty. Day-old cold rice works best (it only needs a tablespoon of breadcrumbs or less), but if the rice is hot or warm you will need more breadcrumbs to bind the mixture. Test the mixture by forming a small ball; it should hold together. Cook the test ball and adjust the seasoning (salt and pepper) of the mixture to taste.   
3) Form patties using a ½ cup measuring cup.
4) Cover in seasoned breadcrumbs.
5) Use immediately or can be refrigerated covered for a few hours.
6) Preheat a fry pan (cast iron is best) to medium hot, add 1½ tablespoons of oil and heat until the oil shimmers, place the patties well spaced out onto the fry pan, then lower the heat to medium.
7) Pan fry for about 3 minutes each side for a thin, lightly browned crust, or about 10 minutes for a darker, thicker, crisper crust. Wait until the patties can be released from the pan with a shake of the pan or a light turning of the patty using your fingers before flipping over to cook the other side; add the remaining 1½ tablespoons of oil when you flip the patties. Flip only once. You can fry the sides of the patty if you want brown sides on your patty.

Pictorial Guide
Some of the ingredients
Photobucket

Starting to mix the patty mixture           
Photobucket

About ready to be tested
Photobucket

The test ball to check if the mixture will hold together
Photobucket

Form patties using a ½ cup measuring cup
Photobucket

Crumb (bread) the patties                   
Photobucket

Cover and refrigerate


Preheat the frying pan, add oil and wait until the oil shimmers, then add the patties well spaced out onto the pan
Photobucket

Wait until the patties can be released by a light shaking of the pan or by finger-turning the patty, then flip the patties over and add some extra oil (these were fried for 10 minutes)
Photobucket

Enjoy picture perfect patties
Photobucket

This patty was pan-fried in my cast iron fry pan; notice the shiny, very crisp crust compared to the patty above
Photobucket

Zucchini, prosciutto & cheese fritters


Servings: makes about 8-10 two inch (five cm) fritters
Recipe can be doubled
adapted from http://smittenkitchen.com/2011/08/zucchini-fritters/

This makes a great light lunch or a lovely side dish for dinner. 

Ingredients:
500 gm (1 lb) zucchini (two medium)
1 teaspoon (5 ml) (7 gm) salt
½ cup (120 ml) (60 g/2 oz) grated cheese, a strong bitty cheese is best
5 slices (30 gm/1 oz) prosciutto, cut into small pieces
½ cup (120 ml) (70 gm/2½ oz) all-purpose (plain) flour plus ½ teaspoon baking powder, sifted together
2 large eggs, lightly beaten
2 spring onions, finely chopped
1 tablespoon (15 ml) chilli paste
1 teaspoon (5 ml) (3 gm) black pepper, freshly cracked
2 tablespoons (30 ml) oil, for frying

Directions:
     
  • Grate the zucchini with a box grater or food processor. Place into large bowl, add salt, wait 10 minutes.
  •  
  • While waiting for the zucchini, pan fry the prosciutto pieces until cooked. Remove from the pan and place the prosciutto onto a rack; this will crisp up the prosciutto as it cools. Paper towels tend to make prosciutto soggy if it is left on them.
  •  
  • When the zucchini is ready, wrap it in a cloth and squeeze dry with as much force as you can; you will get a lot of liquid (over ½ cup). Discard the liquid, as it will be too salty to use.
  •  
  • Return the dried zucchini to the bowl and add the prosciutto, cheese, sifted flour and baking powder, chilli paste, pepper, a little salt and the lightly beaten eggs.
  •  
  • Mix until combined. If the batter is too thick you can add water, milk or another egg; if too wet, add some more flour. It should be thick and should not flow when placed onto the frying pan.
  •  
  • Preheat a frying pan (cast iron is best) until medium hot, add 1/3 of the oil wait until it shimmers.
  •  
  • Place dollops of batter (about 2 tablespoons each) onto the fry pan, widely spaced out. With the back of a spoon, smooth out each dollop to about 2 inches (5 cm) wide; do not make the fritters too thick. You should get three or four fritters in an average-sized fry pan. Lower the heat to medium.
  •  
  • Fry for 3-4 minutes on the first side, flip, then fry the other side about 2-3 minutes until golden brown. Repeat for the remaining batter, adding extra oil as needed.
  •  
  • Place cooked fritters into a moderate oven on a baking dish for 10 minutes if you want extra crispy fritters.


Pictures of process – fresh zucchini, grated zucchini, liquid released from salted and squeezed dry zucchini, ingredients for the fritters, fritter batter and frying the fritters.
Photobucket

Cooked fritters
Photobucket

California Turkey Burger


Servings: makes about 10 burgers
Recipe can be doubled
adapted from Cooking Light Magazine September 2005:
http://www.myrecipes.com/recipe/california-burgers-10000001097016/

Sauce:
½ cup (120 ml) ketchup
1 tablespoon (15 ml) Dijon mustard
1 tablespoon (15 ml) fat-free mayonnaise

Patties:
½ cup (120 ml) (60 gm/2 oz) finely chopped shallots
¼ cup (60 ml) (30 gm/1 oz) dry breadcrumbs
1 teaspoon (5 ml) (6 gm) salt
1 teaspoon (5 ml) Worcestershire sauce
¼ teaspoon (¾ gm) freshly ground black pepper
3 garlic cloves, minced
1¼ lbs (600 gm) ground turkey
1¼ lbs (600 gm) ground turkey breast
Cooking spray

Remaining ingredients:
10 (2-ounce/60 gm) hamburger buns
10 red leaf lettuce leaves
20 bread-and-butter pickles
10 (1/4-inch thick/5 mm thick) slices red onion, separated into rings
2 peeled avocados, each cut into 10 slices
3 cups (750 ml) (60 gm/2 oz) alfalfa sprouts

Directions:
1. Prepare the grill to medium-high heat.
2. To prepare sauce, combine first 3 ingredients; set aside.
3. To prepare patties, combine shallots and the next 7 ingredients (through turkey breast), mixing well. Divide mixture into 10 equal portions, shaping each into a 1/2-inch-thick (1¼ cm thick) patty. Place patties on grill rack coated with cooking spray; grill 4 minutes on each side or until done.
4. Spread 1 tablespoon sauce on top half of each bun. Layer bottom half of each bun with 1 lettuce leaf, 1 patty, 2 pickles, 1 onion slice, 2 avocado slices, and about 1/3 cup of sprouts. Cover with top halves of buns.                                                                                                         

Photobucket

Yield: 10 servings (serving size: 1 burger) – Nutritional Information: CALORIES 384 (29% from fat); FAT 12.4g (sat 2.6g, mono 5.1g, poly 2.8g); PROTEIN 31.4g; CHOLESTEROL 68mg; CALCIUM 94mg; SODIUM 828mg; FIBER 3.9g; IRON 4mg; CARBOHYDRATE 37.5g
Lisa’s Notes:
Nutritional information provided above is correct for the recipe as written.  When I make these burgers, the only ingredients I change are using regular mayo, and dill pickles.  My red lettuce of choice is radicchio.  I’ve both grilled and pan fried these burgers and both are delicious.  If you decide to pan fry, you’ll need a little extra fat in the pan – so use about 2 tsp. of extra virgin olive oil, or canola oil before laying your patties on the pan.  Cook for approximately 5 minutes on each side, or until done.  Do not overcook as the patties will dry out and not be as juicy and tasty! :)

French Onion Salisbury Steak


Courtesy of Cuisine at Home April 2005 edition
Makes 4 Steaks; Total Time: 45 Minutes

Ingredients:
1 1/4 lb (600 gm) ground chuck 
1/4 cup (60 ml) (30 gm/1 oz) fresh parsley, minced
2 tablespoons (30 ml) (⅓ oz/10 gm) scallion (spring onions), minced
1 teaspoon (5ml) (3 gm) kosher salt or ½ teaspoon (2½ ml) (3 gm) table salt
1/2 teaspoon (2½ ml) (1½ gm) black pepper
2 tablespoons (30 ml) (½ oz/18 gm) all-purpose (plain) flour
2 tablespoons (30 ml) olive oil
2 cups (240 ml) (140 gm/5 oz) onions, sliced
1 teaspoon (5 ml) (4 gm) sugar
1 tablespoon (15 ml) (⅓ oz/10 gm) garlic, minced
1 tablespoon (15 ml) (½ oz/15 gm) tomato paste
2 cups (240 ml) beef broth
1/4 cup (60 ml) dry red wine
3/4 teaspoon (2 gm) kosher salt or a little less than ½ teaspoon (2 gm) table salt
1/2 teaspoon  (2½ ml) (1½ gm) dried thyme leaves
4 teaspoons (20 ml) (⅓ oz/10 gm) fresh parsley, minced
4 teaspoons (20 ml)  (2/3 oz/20 gm) Parmesan cheese, shredded

Cheese Toasts
4 slices French bread or baguette, cut diagonally (1/2" thick) (15 mm thick)
2 tablespoons (30 ml) (30 gm/1 oz) unsalted butter, softened
1/2 teaspoon (2½ ml) (2 gm) garlic, minced
Pinch of paprika
1/4 cup (60 ml) (30 gm/1 oz) Swiss cheese, grated (I used 4 Italian cheese blend, shredded)
1 tablespoon (15 ml) (⅓ oz/10 gm) Parmesan cheese, grated

Directions:
1. Combine chuck, parsley, scallion, salt and pepper. Divide evenly into 4 portions and shape each into 3/4"-1" (20-25 mm) thick oval patties. Place 2 tablespoons flour in a shallow dish; dredge each patty in flour. Reserve 1 teaspoon flour.
2. Heat 1 tablespoon oil in a sauté pan over medium-high heat. Add patties and sauté 3 minutes on each side, or until browned. Remove from pan.
3. Add onions and sugar to pan; sauté 5 minutes. Stir in garlic and tomato paste; sauté 1 minute, or until paste begins to brown. Sprinkle onions with reserved flour; cook 1 minute. Stir in broth and wine, then add the salt and thyme.
4. Return meat to pan and bring soup to a boil. Reduce heat to medium-low, cover and simmer 20 minutes.
5. Serve steaks on Cheese Toasts with onion soup ladled over. Garnish with parsley and Parmesan.

For the Cheese Toasts
6. Preheat oven to moderately hot 200°C/400ºF/gas mark 6.
7. Place bread on baking sheet.
8. Combine butter, garlic and paprika and spread on one side of each slice of bread. Combine cheeses and sprinkle evenly over butter. Bake until bread is crisp and cheese is bubbly, 10-15 minutes.

French Onion Salisbury Steak
Photobucket

Potato Rösti


Servings: makes two large rösti
adapted from a family recipe

The classic rösti; cheap, easy and so tasty.

Ingredients:
1 kg (2½ lb) potatoes
1 teaspoon (5 ml) (6 gm) salt
2 teaspoons (10 ml) (6 gm) black pepper, freshly milled
1 large egg, lightly beaten
2 tablespoons (30 ml) (½ oz/15 gm) cornflour (cornstarch) or use all-purpose flour
3 tablespoons (45 ml) oil, for frying

Directions:
     
  1. Grate the peeled potatoes lengthwise with a box grater or a food processor.

  2. Wrap the grated potato in a cloth and squeeze dry; you will get a lot of liquid (over ½ cup). Discard the liquid, since it is full of potato starch.

  3. Return the dried potato to the bowl and add the egg, cornflour, pepper and salt.

  4. Mix until combined.

  5. Preheat a frying pan (cast iron is best) until medium hot, add 2 teaspoons of oil and wait until the oil shimmers.

  6. Place half of the mixture into the pan and flatten with a spoon until you get a smooth, flat surface. Lower the heat to medium.

  7. Fry for 8-10 minutes (check at 6 minutes) on the first side. To flip, slide the rösti onto a plate, invert it onto another plate, then slide it back into the pan; fry the other side about 6-8 minutes until golden brown. Repeat to make the second rösti.


Pictures of process – Peel 1 kg spuds, grate lengthwise, squeeze dry, add 1 egg, 2 tablespoons starch, salt and pepper. Pan fry.
Photobucket

Pictures of the grated potato before (left) and after (right) squeezing dry. Notice in the left hand pictures the gratings are covered in moisture and starch, while in the right hand pictures the grated potato is dry and doesn't stick together.
Photobucket

Pictures of the finished small rösti
Photobucket

Pictures of the large rösti
Photobucket

Chicken, potato and corn patties
I had some leftover chicken legs and boiled potatoes from dinner last night, so I made up some patties. The patties are made from 1 kilogram of finely grated cold boiled potatoes, the meat from 4 chicken legs (removed and finely chopped), and one can of corn kernels. The binder was one egg and 1/4 cup of self-raising wholewheat flour.

The crumbed (breaded) patties waiting to be pan fried
Photobucket

Patties pan frying
Photobucket

The finished patties
Photobucket
Photobucket

Meatballs
Photobucket
Photobucket
Photobucket

I made meatballs using high quality ground veal and pork (30% fat). I didn't use any binders in the mixture, just a little seasoning: chilli, garlic and dried mushroom powder.

The meatballs waiting to be fried
Photobucket

Frying the meatballs
Photobucket

The finished meatballs
Photobucket

Of course I made spaghetti and meatballs for dinner; so, so delicious
Photobucket

Thai Fish Cakes
Photobucket
Photobucket

I adore Thai fish cakes but I have never really made them; I was surprised how simple it is if you have a very strong food processor. Basically you make a paste from 1/2 kg (1 lb) of white fish fillets (I used catfish (basa) fillets) with 1 egg and 6 tablespoons of flavourings (a combination of 1 Tbsp fish sauce, 1 tsp chilli, 2 Tbsp red curry paste, 1 Tbsp coconut cream, 1 Tbsp chilli crab flakes, 1/2 tsp sugar, 1/2 tsp salt, 1/2 tsp shrimp paste, a few spices), 6 kaffir lime leaves and 2 tablespoons cornflour (cornstarch) with a teaspoon of baking powder. You form small patties (each 2 tablespoons) from the paste and pan fry until cooked. These are just as good as the cafe ones I buy and only cost about 30 cents each instead of $1.90 at the cafe. A good basic recipe for Thai fish cakes is here http://thaifood.about.com/od/thaiseafoodrecipes/r/classicfishcakes.htm I added some extra baking powder and cornflour to the basic recipe since they make the cakes rise and the interiors light and fluffy. Super tasty and so cute.

Photobucket

Storage & Freezing Instructions/Tips:
Most rissoles, croquettes and dry fritters keep well for three or four days if covered and kept in the fridge. Uncooked and cooked rissoles and croquettes can be frozen for at least one month.

Additional Information: 
An index of Aussie patty recipes http://www.taste.com.au/search-recipes/?q=patties&publication=
An index of Aussie rissole recipes http://www.taste.com.au/search-recipes/?q=rissoles&publication=
An index of American patty recipes http://allrecipes.com/Search/Recipes.aspx?WithTerm=patty%20-peppermint%20-dressing&SearchIn=All&SortBy=Relevance&Direction=Descending
An index of American burger recipes http://busycooks.about.com/cs/easyentrees/a/burgers.htm 
A great vegetable and chickpea recipe http://www.exclusivelyfood.com.au/2006/06/vegetable-and-chickpea-patties-recipe.html
A baked vegetable patty recipe http://patternscolorsdesign.wordpress.com/2011/02/20/baked-vegetable-patties/
Vegetable patty recipes http://www.divinedinnerparty.com/veggie-burger-recipe.html
Best ever beet(root) and bean patty http://www.thekitchn.com/restaurant-reproduction-bestev-96967
Ultimate veggie burgers http://ask.metafilter.com/69336/How-to-make-awesome-veggie-burgers
One of best zucchini fritter recipes http://smittenkitchen.com/2011/08/zucchini-fritters/ 
Old School Meat rissoles http://www.exclusivelyfood.com.au/2008/07/rissoles-recipe.html
How to form a patty video http://www.youtube.com/watch?v=iHutN-u6jZc
Top 12 vegetable patty recipes http://vegetarian.about.com/od/veggieburgerrecipes/tp/bestburgers.htm
Ultimate Meat Patties Video http://www.chow.com/videos/show/youre-doing-it-all-wrong/55028/how-to-make-a-burger-with-hubert-keller
Beautiful vegetable fritters so pretty http://helengraves.co.uk/tag/beetroot-feta-and-chickpea-fritters-recipe/   
Information about veggie patties http://kblog.lunchboxbunch.com/2011/08/veggie-burger-test-kitchen-and-lemon.html  

Disclaimer:
The Daring Kitchen and its members in no way suggest we are medical professionals and therefore are NOT responsible for any error in reporting of “alternate baking/cooking”.  If you have issues with digesting gluten, then it is YOUR responsibility to research the ingredient before using it.  If you have allergies, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. If you are lactose intolerant, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. If you are vegetarian or vegan, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. The responsibility is YOURS regardless of what health issue you’re dealing with. Please consult your physician with any questions before using an ingredient you are not familiar with.  Thank you! :)
          Oct 2011 Daring Bakers Challenge - Povitica        
A tale of two povitica loaves
Photobucket

This month's challenge was to make povitica (a type of nut roll).

Blog-checking lines: The Daring Baker’s October 2011 challenge was Povitica, hosted by Jenni of The Gingered Whisk. Povitica is a traditional Eastern European Dessert Bread that is as lovely to look at as it is to eat!

This is my first time ever making this sort of recipe, so I had absolutely no idea what to expect from it. Well, after doing some interesting internet research and ringing a pastry chef mate of mine whose mum is from Croatia, and another friend's mum who is from Poland, I have some (little) understanding of the process and what to expect.

When comparing my notes with the information from my friends and their mums I found that povitica (or nut rolls) seems to be made by two slightly different methods that lead to two very dissimilar results; it seems that the “Northern European“ version (my name) is dense and moist like a firm bread-and-butter pudding, while the “Southern European” version is a well risen roll slightly less dense than the Northern version.

One major difference between the two versions is an hour of rising time before the final baking. Our challenge recipe only has ¼ hour of rising time before the final baking like a lot of Northern recipes while a typical Southern recipe has an hour of rising time before the final bake.

During my internet research I found that there are other differences; the Northern version uses a soft dough that is rolled out fairly thickly while the filling has a firmish consistency, while the Southern version uses a firmer dough that is rolled out very thinly while its filling has a consistency of thick honey. Since I was making two loaves (½ batch) anyway I thought I would do one loaf using the challenge instructions (which are very Northern) and do the other loaf using the Southern method. For both versions you make the dough layer as thin as possible.

A (Northern) povitica is meant to be dense and moist, so it is important not to let the shaped roll rise too much before baking (in our challenge recipe you only let it rest for 15 minutes); in the other version you let the unbaked roll rise until doubled in volume, then bake it.

I found that if you refrigerate the loaf until cold, it will slice thinly and cleanly; remember to serve it at room temperature. Also let the povitica rest for a few hours (a day is better) before cutting it; this will help it set so it can be sliced cleanly.

The biggest tip - If you find the dough is too springy let it rest.

Uhmmm, I don't know why, but every stage of this recipe was an uphill battle. I used “00” soft flour (finely milled white flour, 8% protein) for the recipe since I had it to hand and I thought it would make the stretching of the dough easier, since “lower gluten” means “easier handling”.

For the nut filling I used about 300 grams (10½ ounces) walnuts and 250 grams (9 ounces) of mixed nuts; I also added 4 tablespoons of cocoa powder since I wanted a chocolate hit from the povitica. I used ¾ cup of white sugar and ¼ cup of dark brown sugar in the filling. And I used an unsalted “European” styled butter (87% fat) since it had to be used up.

Dough – Firstly, the size of the dough is amazing when you stretch it out; you will need to do it on a large table with a floured tablecloth. I found that the dough was very, very hard to stretch: it wanted to go back to its original shape, that is, every time I rolled it or stretched it out it would spring right back. From experience I knew what to do in this situation: I let the partially stretched-out dough rest for about 15 minutes, covered in plastic, so the gluten strands in the dough would relax, making stretching a lot easier. After resting the dough I then proceeded to make a very thin layer of it … that is … after a lot of time doing guarded stretching and gentle man-handling … finally … I could see magazine print through the dough, but this process took about 45 minutes. I think the problem was that I added too much flour while forming the dough; next time I will just have the dough a little tacky, which will make it easier to stretch out. Also, I will add ½ teaspoon lemon juice (for a ¼ batch) next time, since the acidity helps to tenderise the dough, making it easier to stretch out. The second dough was a lot easier to roll out since by this time it had had a lot more resting than the first dough; it only took 15 minutes to roll out to phyllo (filo) sheet thinness. Looking back, I should have added about 3 tablespoons of milk to get the correct consistency.

Filling - Firstly, the filling seems like a huge amount, but you need it all for the ½ batch; its volume is almost 1 litre (almost 4 cups). I found that the filling was much too stiff to spread out on the thin dough layer without tearing it (I was using very dry nuts, which could have been the problem?); I had to add 4 tablespoons of warm milk and microwave it to get it to the right consistency (like very thick honey). It is best to place tablespoon dollops of the filling evenly over the dough, then spread these dollops evenly across the thin dough. After 20 minutes (!) of careful and methodical spreading, the nut filling was done. Of course the second version was a breeze to spread; again, I think resting time really helps the nut filling when spreading it over the thin dough sheet. I trimmed the edges and placed the roll into the baking pan so that it was coiled on itself; I egg washed it just after forming the unbaked loaf and once again just before baking.

I had given my baking pans away to a friend for the long weekend, so I used my high loaf tin. I let one loaf rest for 15 mins then baked it, and the other loaf I let rise until doubled in volume then baked it; both were baked the same way (same temperatures and times). I'm sure that there is nothing wrong with the recipe; I think I didn't let the dough rest enough for the first version and I added too much flour at the start. I have to say, after all the troubles they both looked good: the loaf using the challenge instructions expanded about x2, the other version expanded about x2½, both had great colour, and the crust dough layer for both was very thin, so thin you could see the nut filling through it. And the colour was great, so brown and shiny. Since the final baked loaf rises so much, take this into account when you are shaping the loaf into the baking pan. I had a little trouble getting it out of the pan, so I recommend using parchment paper or buttering and flouring your baking pan well.

The dough starting to be mixed; notice the foamy yeast mixture
Photobucket

How to tell if your dough is kneaded enough: if you poke an indentation into the dough it should spring back. I realise now that I should have added more liquid; the dough should be tacky.
Photobucket

The huge amount of nut filling. I used my food processor to make it; this is the first time I have used the machine since I bought it two years ago (LOL LOL). In this instance I thought it was worth the effort to clean the machine after the task.
Photobucket

Stretching the dough to size … a pain to do in every sense of the word
Photobucket

The baked Northern povitica
Photobucket

The southern povitica
Photobucket

If you want to do the recipe over two days, I would do the nut filling and the challenge recipe up to step 7; that is, make the dough and let it rise overnight in the refrigerator. Then the next day, return the dough to room temperature (a couple of hours) and make the povitica as per the recipe. This sort of recipe freezes very well: freeze the baked loaf and thaw it in the fridge overnight, loosely covered in paper towels, then cover in plastic wrap; this stops the povitica from becoming soggy from condensation.

The verdict – the challenge (Northern) povitica is a really delicious nut roll with a very dramatic interior appearance; the texture of it is very similar to bread-and-butter pudding, very moist and “firm-ish” to the tooth. The “Southern” version had great height and was a lot lighter in texture than the challenge recipe version – still good. But I liked the challenge version much more; the interior looked better and tasted better also. Overall I was very pleased, though it was a frustrating process for the first version; the second version was a breeze.

Comparison of the two loaves – on the left is the challenge version (which I call Northern) and on the right is the Southern version. As you can see very different looking results.
Photobucket

Tips and hints (some of these are from the other bakers' experiences with this recipe; I will add extra tips and hints during the month as others post their results)
1. It is very important to get the correct consistency for the dough and the nut filling; if you do, the process is a breeze. Remember, when it comes to making bread, recipes are guidelines: flour absorbs moisture from the air, so it is not unusual to add extra liquid or flour to get the correct consistency for the dough (in our case it should be slightly sticky). Similarly, how old the nuts are and how they are ground (this is highly variable for each baker) determines how the nuts absorb the liquid, so again look at the consistency and adjust the liquid for the nut filling; you want it to be like thick honey. I think this is the real lesson of this challenge: don't be afraid to adjust the liquid amounts to suit what you find in front of you in the mixing bowl!
2. Use plain (all-purpose) flour. Use the flour sparingly when you mix the initial dough; it should be sticky, and don't be afraid to add liquid to get the correct consistency if you have used too much flour. When you start mixing the dough it looks like there isn't enough flour; avoid adding any extra at this stage. It is best to mix the dough up (reserving some of the flour) and really give it a good working over. It will be sticky (slap it down on the counter a few times and use a scraper to scoop it off the counter and knead it hard), but it will become less sticky as you knead it; that way you will use the least amount of flour.
3. Let the dough rise, then punch it down and let it rest until it's pliable; if it is too springy, let it rest longer.
4. Always check that your nuts are fresh and not bitter tasting; ground nuts in a packet can easily be a year old. Fresh nuts give the best result, leading to a lovely moist filling. Grind or process the nuts very finely; if the nut pieces are too large they will break and tear the dough layer when you roll it up.
5. The consistency of the nut filling should be like thick honey; don't be afraid to add some liquid to get the correct consistency. Microwaving really helps make it spreadable.
6. The amount of time you let the roll rise just before baking leads to different results for the final baked povitica.
7. Roll up the povitica fairly tightly (using the floured sheet as your guide) so the final baked loaf will not fall apart and the layers will have a good pattern with no voids between the layers.
8. To check if the loaf is ready, lightly knock the top of the roll; it should sound hollow. Or insert a skewer (or small thin knife) into the loaf for a slow count of three; it should come out dryish and feel warmish. If the skewer is wet or feels cool, bake for a longer time, but don't over-bake, since the filling will dry out, making the final loaf dry and making the layers fall apart when the roll is cut into slices.
9. Leave the roll in the tin until it has cooled; this helps firm it up so the roll will not collapse when you take it out of the pan (recall the loaf weighs over 1 kg (2 lbs)).
10. Let the roll rest for a few hours (better for a day) until completely cooled and set before cutting; if you refrigerate the loaf it will cut thinly and cleanly without crumbs. Remember to serve the slices at room temperature. It makes great toast, or even better French toast, yum yum.
11. The loaf gets better and better the longer it matures in the refrigerator.

A few more tips and hints from Wolf, who has made povitica every Christmas for many years; I put these here so they can be found easily by the forum members.
A. Don't spread the filling right to the edges of the dough. You want to stay within at least 1/2 inch of the sides. This way, you can seal the filling inside and won't have leakage.
B. I use a stoneware bread pan to bake mine in. The one in the photo had the ends tucked underneath to the center, so it presented a smooth top. It was also rolled to the center from BOTH ends. That's how I got 4 distinct swirls. (See her exquisite povitica here)
C. Definitely cool the loaf in whatever you bake it in, until you can handle it with your bare hands, before turning it out onto a cooling rack to finish cooling. It slices cleaner when completely cooled or refrigerated.
D. Roll the dough tighter than you think you need to. Yes, some filling will squeeze out the ends, but you'll get a neater swirl in the center, less voids and gaps and it'll stay together better, as well as make it a nicer sliced bread for toasting or even french toast- which is awesome with this type of bread.
E. It will freeze well, especially if well wrapped- I've done one upwards of a month before. It does ship very well- I ship one loaf to my parents every Christmas and one to my In Laws, my recipe makes 3 full sized loaves and will last upwards of a week on the counter at room temp. - if it lasts that long in your house }:P

Wolf graciously included instructions for the method used to obtain her exquisite swirl-patterned povitica.

I have drawn some diagrams of the method

The stretched out dough layer covered with filling
Photobucket

Then roll each long edge to the center thus forming two swirls
Photobucket
Photobucket

Then take each end and fold them towards the middle of the roll (the brown line is where the ends finish up when folded) thus forming a double height roll
Photobucket
Photobucket

Then turn the loaf over and place into the pan so the seam ends are at the bottom of the pan which means the top is smooth and has no cut seams or edges
Photobucket

Txfarmer, a very experienced and superb baker, also posted some great tips
1) At first glance, since we need to stretch the dough very thin, it seems to make sense not to knead the dough too much. Kneading == strong gluten == too elastic == hard to roll out/stretch. However, what we really need is a dough that can be stretched out WITHOUT BREAKING, and that actually requires the dough to have strong gluten. I make breads a lot, and from my past experience, I think the solution here is to have a wet (as wet as one can handle) dough that's kneaded fairly thoroughly. Wet doughs are more extensible, despite being kneaded very well. I kept the dough so wet that it was sticking to the mixer bowl at the end of kneading; however, a large, transparent, strong "windowpane" could be stretched out, which is the indication of strong gluten.
2) With the right dough, stretching out was easy, < 10 mins of work. The dough was tough enough not to break, yet wet enough to be stretched out. I made quarter-size (i.e. one loaf), but the dough was stretched out to cover almost all of my coffee table. The tip of using a sheet underneath was very good. I used a plastic table cloth (lightly floured). In fact the dough was stretched so large that the filling was barely enough to cover it.
3) I proofed the dough longer than the formula suggests to get more volume, and the loaf less dense. I understand the authentic version is quite dense, but my family tends to like lighter fluffier loaves when it comes to sweet breads.
4) Since the dough was kneaded well, the final loaf had very good volume. It rose well above the rim in my 8.5x4.5 inch pan.

Poviticas for morning tea
I needed to make a treat for nibbles at a morning tea, so I decided to make two poviticas – one filled with tea-infused figs and almonds and the other filled with coffee-infused dates, cocoa and hazelnuts. I wanted a strong contrast in the flavours between the two loaves. The tea/fig/almond filling was a lovely 'camel' colour and its flavour was like caramelised fig on the palate; each element was present, and I really liked how the tea melded with the fig and the almond – this povitica was addictively GOOD with tea. The other loaf had a very strong coffee/date base flavour, while the cocoa and hazelnut added a lovely lingering aftertaste – the winner for me. I was very, very pleased with the filling flavours and how they tasted with tea or coffee. (Apart from the coffee-infused date povitica looking like a baked chicken, LOL LOL.) Those loaves were moist, very dense and incredibly rich, perfect (when thinly sliced) with a cuppa. Feeds a lot of people! They were like very moist, ultra-dense fruit cakes, I thought, hence the reason for very thin slices to be served with your choice of tea or coffee. Not recommended for children, too much caffeine!

For this attempt I was careful about adding the flour and made sure that the finished dough was a little sticky; this time I found it a lot easier to stretch, though the consistency wasn't exactly right, I felt. I need to better understand how to do the spreading out of the filling, and I still haven't mastered how the amount of filling needs to be in ratio to the amount of stretched-out dough; how to form a good pattern of swirls also needs some thought. So there are a lot of little things for me to practice over the next few weeks.

I will give this recipe another go since I want to perfect the process (making pretty interior patterns and getting the texture right) since these loaves would be a great Christmas present.

Tea infused figs with almonds
Photobucket
Photobucket

Coffee infused dates with hazelnuts (the finished loaf looks a little like a roasted chicken LOL)
Photobucket
Photobucket

Tea infused figs with almonds
375 grams (13 ounces) finely chopped dried figs
¾ cup of very very strong tea (I used 4 teabags of Earl Grey tea)
¾ cup of vanilla sugar
1 cup (120 grams) (4¼ ounces) ground almonds
2 large eggs
½ cup clotted cream (66% butter fat)
Method – combine all the ingredients (except eggs and cream) in a small saucepan, bring to the boil and simmer gently for 10 minutes. Beat the eggs and pour slowly into the mixture, stirring constantly, and simmer gently 5 minutes more. This mixture scorches easily, so the heat must not be too high. Cool the mixture and add the clotted cream. Place the filling into a container and let it rest overnight before using.

Coffee infused dates with hazelnuts
375 grams (13 ounces) of finely chopped dried dates
¼ cup (55 gm) (2 oz) unsalted butter, fried until nut brown
¾ cup of very very strong coffee (I used 1½ tablespoons of instant coffee)
½ cup of dark brown sugar
¼ cup of cocoa powder
1 cup (120 grams) (4¼ ounces) ground hazelnuts
2 large eggs
¼ cup clotted cream (66% butter fat)
Method – combine all the ingredients (except eggs and cream) in a small saucepan, bring to the boil and simmer gently for 10 minutes. Beat the eggs and pour slowly into the mixture, stirring constantly, and simmer gently 5 minutes more. This mixture scorches easily, so the heat must not be too high. Cool the mixture and add the clotted cream. Place the filling into a container and let it rest overnight before using.
          Blast from Bob        
Bob gives me an update.  He is doing some interesting work with the car and taking it to the next level:
 I started to talk about the charger issue. I had contacted the Zivan rep and got a cold shoulder from them. I designed my own charger and got it working after a few blown transistors. My goal was 135 volts max and 10 amps max, and the charger achieved both of those goals. The packaging was rather crude because I used a discarded chassis from some unknown piece of electronic gear. It was so bulky that I was unable to close the hood while charging.

Right after I got that thing working well, I stumbled upon the forum called diyelectriccar. Under the heading about charging, I found other people unable to get Zivan to properly convert their chargers to lithium. One person actually went to the extent of designing a bug to replace the existing processor. He was willing to sell copies of this bug so I bought one. He even included a sample program which he had used for his car. I don't know much about programming, but since all I had to do was change a few lines of code to match my battery pack, I got the Zivan working for lithium! I put my design aside and just chalked it up to experience.

Another little problem I solved involved the lack of a reliable parking brake. Using the chocks has been unhandy. I lost one when I drove off without it. The solution is a brake club. Rather than describe it, I'll send a picture. The club goes between the pedal and the base of the seat. It goes over center and latches.

Another latch situation came up. The 12v battery went down to the extent it wouldn't operate the relay to energize the main solenoid. This is a situation which could leave you stranded. This relay also connects the DC-DC converter from the main battery to the little one. The emergency fix for this is to drill a small hole in the side of the DC relay, turn on the key, and insert a toothpick into the hole against the relay armature and push. The relay will hold because the small battery is being charged by the big battery.




I hope to get down there and film a ride sometime.  It would be quite a different ride, I think.
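For readers curious about the "change a few lines of code to match my battery pack" part, here is a minimal, purely hypothetical sketch in C of the kind of per-pack limit constants and simple constant-current/constant-voltage (CC-CV) logic a DIY lithium charger program might expose. This is not Bob's program or the Zivan firmware; only the 135 volt / 10 amp targets come from his note, while the taper cutoff value and the overall structure are assumptions for illustration.

/* Hypothetical sketch only: illustrates per-pack charge constants and a
   simplified CC-CV state machine. Not actual charger firmware. */
#include <stdio.h>

#define PACK_MAX_VOLTS    135.0f   /* end-of-charge voltage (Bob's stated target) */
#define PACK_MAX_AMPS      10.0f   /* constant-current limit (Bob's stated target) */
#define TAPER_CUTOFF_AMPS   0.5f   /* assumed: charging ends once current tapers to this */

typedef enum { CHARGE_CC, CHARGE_CV, CHARGE_DONE } charge_state;

/* Hold maximum current until the pack reaches its maximum voltage,
   then hold that voltage until the current tapers off. */
static charge_state next_state(charge_state s, float pack_volts, float pack_amps)
{
    switch (s) {
    case CHARGE_CC: return (pack_volts >= PACK_MAX_VOLTS) ? CHARGE_CV : CHARGE_CC;
    case CHARGE_CV: return (pack_amps <= TAPER_CUTOFF_AMPS) ? CHARGE_DONE : CHARGE_CV;
    default:        return CHARGE_DONE;
    }
}

int main(void)
{
    /* Made-up readings, just to step the state machine through a charge. */
    charge_state s = CHARGE_CC;
    s = next_state(s, 120.0f, PACK_MAX_AMPS);   /* still in constant current */
    s = next_state(s, PACK_MAX_VOLTS, 9.5f);    /* switches to constant voltage */
    s = next_state(s, PACK_MAX_VOLTS, 0.4f);    /* current has tapered: done */
    printf("final state = %d (2 = done)\n", s);
    return 0;
}

Adapting such a program to a different pack would mostly mean editing the three constants at the top, which matches the spirit of Bob's "change a few lines of code" remark.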
          Bangalore University Fourth Semester BCA Exams Question Papers        
Bangalore University Fourth Semester BCA Exams Question Papers

Question Papers 2015

IV Sem Visual Programming- Y2k8 Scheme
IV Sem Unix Programming- Y2k8 Scheme
IV Sem Language Sanskrit-IV
IV Sem Language Kannada-IV
IV Sem Data Communication and Networks -Y2K8 Scheme

Question Papers (2010-2013)

Computer Graphics Dec-09
Computer Graphics 2011-(O.S)
Computer Graphics-2013
Data Comm.IV Jun-2010
Data Communication Networks 2011
Data Communication and Networks 2011
Data Communications Networks 2013
English-2010
English-2013
Environmental Studies May-2011
Hindi-2011(N.S)
Hindi-2011
Hindi-2013
Hindi Jun-2010
Kannada Jun-2010
Kannada 2013
Kannada Part II 2011
Microprocessors 2013
Sanskrit Jun2010
Software Engineering
Software Engineering 2010
Software engineering 2011
System Program IV-Jun2010
System Programming 2011
System Programming 2013
System Software 2010
System Software 2011-(O.S)
System Software Dec-09
System Software 2013
Unix Operating System Dec-09
Unix Operating System May-2011
Unix Operating Systems 2013
Unix Programming Jun-2010
Unix Programming-2013
Unix Programming May-2011
Visual Programming Jun-2010
Visual Programming-2013
Visual Programming May-2011


          Moto G (Gen 2) Launched By Motorola Corporation in India at Rs. 12,999 |         
The Moto G (Gen 2) features a 5-inch HD (720x1280 pixel) display, and is powered by a 1.2GHz quad-core Snapdragon 400 (MSM8226) processor, coupled with 1GB of RAM and the Adreno 305 GPU.

The second-generation Moto G features an 8-megapixel autofocus rear camera, and a 2-megapixel front-facing camera. Notably, the smartphone - like the Moto E - features dual front speakers, above and below the display, apart from 2 mics.

(Also see: Moto 360 Smartwatch With 1.56-Inch Circular Display Officially Unveiled)

Connectivity options on the Moto G (Gen 2) include 3G, Wi-Fi 802.11ac, Bluetooth 4.0 LE, Micro-USB, and a 3.5mm audio jack. It measures 141.5x70.7x10.99mm and weighs 149 grams. It is powered by a 2070mAh battery.

Notably, Michael Adnani, Vice President - Retail & Head Strategic Brand Alliances, Flipkart, informed us at the event that Motorola had enjoyed astonishing sales via the e-commerce site in India, selling over 1.6 million units of its Moto devices (Moto E, Moto G and Moto X) in the past 7 months.
Motorola has launched the second generation of its popular Moto G and Moto X devices on Friday, called the Moto G (Gen 2) and Moto X (Gen 2). The Moto G (Gen 2) will be available from Friday (at midnight) via Flipkart, priced at Rs. 12,999 for the 16GB version.

Much to the surprise of fans across the globe, the company launched the smartphones first at its event in India, despite confusing reports of a launch event on Thursday in Chicago. With the popularity of the Moto G and Moto E in India, however, the country's importance to Motorola should not be underestimated.

Both phones are powered by Android 4.4.4 KitKat, and the company has confirmed it will roll out an Android L update for the two devices - however, there was no mention made of such an update for the previous generation.

                           

(Also see: Moto X (Gen 2) Unveiled With 5.2-Inch Display and Snapdragon 801 SoC)

Motorola did mention that the previous generation Moto G, Moto X, and Moto E will remain available on Flipkart, but will be phased out with time.

The Moto G (Gen 2) will be available in 8GB and 16GB variants. The biggest addition to the smartphone compared to the first-generation Moto G is the inclusion of a microSD card slot. The company hasn't yet specified the maximum supported card capacity however.


           Nokia Lumia 730 Dual-SIM , Lumia 735 , Lumia 830 and Lumia 930 Mobile From Microsoft Corporation | Lumia 730 ,735 , 830 Features, Specifications & Price In India        
Microsoft Devices announced the rumoured Superman codenamed selfie phone. The Nokia Lumia 730 and Nokia Lumia 735 have similar looks and innards, the only difference is that the Nokia Lumia 730 is a Dual SIM handset supporting 3G connectivity and the Nokia Lumia 735 can handle a single LTE connection and can be charged wirelessly. The prices of the Lumia 730 and 735 are 199 and 219 Euros respectively. When they arrive in India in bright green, bright orange, dark grey and white colour variants at the end of this month, we can expect the prices to hover around Rs. 16000 and Rs. 18000 respectively.

Nokia Lumia 735.
Nokia Lumia 735​
 
The Nokia Lumia 730 Dual-SIM and Lumia 735 sport a 4.7-inch OLED display with ClearBlack tech. The display is protected by Corning Gorilla Glass 3 and supports supersensitive touch. We were disappointed to see that Microsoft has left out Glance Screen on both these devices. Underneath we find a 1.2GHz quad-core Qualcomm Snapdragon 400 processor and 1GB of RAM running Windows Phone 8.1 Lumia Denim edition.

Nokia Lumia 730.
Nokia Lumia 730 Dual-SIM​

Microsoft says that both of these smartphones are “the ultimate selfie and Skype smartphones” and it isn’t kidding. The Lumia 730 and 735 have an f/2.4 aperture wide-angle 5MP front sensor that can accommodate as many friends as you like. Microsoft also preloads the phones with its new Selfie app, which not only lets you access the front camera quickly but also lets you add filters and skin enhancements, and even lets you slim down your face and blur out the background. For select markets it is offering a free three-month Skype Unlimited World subscription that lets you make calls to mobile and landline phones. On the rear you find a 6.7MP auto-focus camera with a BSI sensor and 4x zoom, capable of recording Full HD videos at 30fps. The only foible is that the phone does not have a physical shutter button like its recently launched big brother.

The Nokia Lumia 730 and 735 have a smaller 8GB internal memory, which can be expanded with the help of microSD cards of up to 128GB. Microsoft throws in 15GB of OneDrive cloud storage as well. Connectivity features include Wi-Fi, Bluetooth 4.0, NFC and microUSB. Like all Lumia family members, you get free voice-guided navigation and offline maps with the help of the Nokia HERE maps service. The company claims that the user-removable battery will last for 25 days on standby and offer 22 hours of talk time.

          Comments: Promotions, sales and coupons for 07.08-13.08.2017        
Lenovo P8
8.0 inch Android 6.0 Snapdragon 625

Купить с купоном "LenovoP8" за $154.99
Купонов только 50 шт.
www.gearbest.com/tablet-pcs/pp_641529.html

FNF Ifive Mini 4S Android 6.0 Tablet PC
Retina Screen 2G RAM 32G ROM 8.0MP Cameras

Buy for $119.99
Only 100 coupons available.
www.gearbest.com/tablet-pcs/pp_602714.html?wid=11

Xiaomi Air 12 Laptop — 4GB RAM 128GB SSD
«12.5 inch Windows 10 Home Chinese Version 7th Gen Intel Core m3-7Y30
Processor»

Купить с купоном "NEWMIAIR12" за $489.99 нет доставки в России
www.gearbest.com/laptops/pp_625263.html?wid=11

CHUWI LapBook Windows 10 Laptop
«15.6 inch Windows 10 Notebook Intel Cherry Trail Z8350 Quad Core 1.44GHz 4GB
RAM 64GB ROM 10000mAh Battery HDMI Bluetooth 4.0 Camera WiFi»

Buy for $180
www.gearbest.com/laptops/pp_589380.html

Xiaomi Redmi 4X
MIUI 8 Snapdragon 435 4100mAh Battery

Купить с купоном "4X6GB" за $166
Купонов только 300 шт.
www.gearbest.com/cell-phones/pp_635898.html

Lenovo ZUK Z2 Pro
«5.2 inch Android 6.0 6GB RAM 128GB ROM Snapdragon 820 64bit Quad Core
2.15GHz 13MP + 8MP Cameras Type-C Bluetooth 4.1»

Купить с купоном "LZUKZ2" за $263.99
Купонов только 200 шт.
www.gearbest.com/cell-phones/pp_462205.html


          Comments: Promotions, sales and coupons for 07.08-13.08.2017        
VOYO Q101 4G Phablet
10.1 inch Android 6.0 MT6753

Buy for $106.99
Only 100 coupons available.
www.gearbest.com/tablet-pcs/pp_623365.html

GPD WIN PC Game Console
5.5 inch Windows 10 Intel Cherry Trail X7-Z8750

Buy for $365.99
Only 100 coupons available.
www.gearbest.com/tablet-pcs/pp_624766.html

Lenovo TAB 2 A7-30 Android 4.4 Phablet
«1GB RAM 16GB ROM Android 4.4 7 inch WSVGA Screen Dual Camera Bluetooth 4.0
GPS WiFi»

Buy for $89.99
Only 14 coupons available.
www.gearbest.com/tablet-pcs/pp_244581.html

Xiaomi Air 12 Laptop — 4GB RAM 256GB SSD
«12.5 inch Windows 10 Home Chinese Version 7th Gen Intel Core m3-7Y30
Processor»

Купить с купоном "MIAIR256" за $549.99
Купонов только 50 шт.
www.gearbest.com/laptops/pp_632454.html?wid=4

ILIFE A6 VACUUM CLEANER
Intelligent Remote Control Sweeping Robot Invisible Wall

Купить с купоном "ILIFEA6" за $199.99
www.gearbest.com/robot-vacuum/pp_625062.html

Xiaomi VIOMI 3.5L Water Filter Pitcher Filtration Dispenser Cup

Купить с купоном "VIOMI" за $42.99 новинка
www.gearbest.com/water-filter/pp_682019.html

Original English Version Xiaomi Mi WiFi Router 3 — EU PLUG
1167Mbps 802.11ac Dual Band MiWiFi APP Control with 4 Antennas

Купить с купоном "mirouterRU" за $26,99
Купонов только 10 шт. СУПЕР ЦЕНА!
www.gearbest.com/wireless-routers/pp_497233.html

Original Xiaomi Wireless Bluetooth 4.0 Speaker
Mini USB Amplifier Stereo Sound Box for iPhone 6S / 6S Plus / iPad Pro

Купить с купоном "mispeakerru" за $15.99
Купонов только 80 шт. СУПЕР ЦЕНА!
www.gearbest.com/speakers/pp_175673.html

Zidoo H6 Pro TV Box
«AllWinner H6 Quad-core Cortex-A53 + Bluetooth 4.1 + 4K VP9 H.265 + Android
7.0»

Купить с купоном "GBZH6P" за $89
Купонов только 50 шт. Действует до:2017-8-31
www.gearbest.com/tv-box/pp_687067.html


          Windows 10 will make old computers obsolete        
Just because your computer can run Windows 10 with aplomb right now, it doesn't mean it's going to get all the Windows 10 versions that Microsoft will launch, and the company won't move past the number 10 anytime soon, so they'll all be Windows 10 versions. This shouldn't sound too surprising, as the same thing happens with Macs: aging models get discontinued as macOS gets more sophisticated.

But it turns out that some users rocking Intel Clover Trail Atom machines were surprised to discover an unpleasant message while trying to get the latest Windows 10 Creators Update installed. "Windows 10 is no longer supported on this PC," the message said, advising them not to proceed with the install. This may sound like a counterintuitive move for a company that annoyed users with Windows 10 update prompts; all of a sudden, the company seemed to be dropping support for some unlucky people.

But, as PC World explains, Windows 10 support is not entirely a Microsoft-only matter. The company will only support a device for as long as the chipmaker keeps supporting the processor that powers it. In this case, Intel dropped official support for Clover Trail CPUs, which means Microsoft had to kill support for all the computers based on those Atom chips. "[These] systems are no longer supported by Intel... and without the necessary driver support, they may be incapable of moving to the Windows 10 Creators Update without a potential performance impact," Microsoft says. So even if your Atom PC experience is still great, you can't have the Creators Update or any of the updates that will follow it.

PC World rightly points out that Microsoft's statement is worrying because of its broadness: the company may choose to suspend updates for any device as soon as a particular component stops receiving official support from its manufacturer. But Microsoft did say it's working with chip makers to find support for older hardware. It's not as if Microsoft wants to stop the growth of Windows 10 by preventing devices that could run the software from installing the latest updates. From a different point of view, if your computer's chip is so old that it won't receive official support anymore, it's probably time to consider an upgrade.
          Call of Duty 4: Modern Warfare + Crack Full        

Call of Duty 4: Modern Warfare Full Crack
Release Date: 07-11-2007
Languages: English, French, German, Italian, Spanish 
Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads
COPYRIGHT : KOS KOMPUTER

Free Download PC Game COD 4: MW Full Version - The new action-thriller from the award-winning team at Infinity Ward, the creators of the Call of Duty® series, delivers the most intense and cinematic action experience ever. Call of Duty 4: Modern Warfare arms gamers with an arsenal of advanced and powerful modern day firepower and transports them to the most treacherous hotspots around the globe to take on a rogue enemy group threatening the world. As both a U.S. Marine and British S.A.S soldier fighting through an unfolding story full of twists and turns, players use sophisticated technology, superior firepower and coordinated land and air strikes on a battlefield where speed, accuracy and communication are essential to victory. The epic title also delivers an added depth of multiplayer action providing online fans an all-new community of persistence, addictive and customizable play.

Screenshot

Minimum System Requirements
  • OS: Windows XP/Vista
  • Processor: Pentium 4 @ 2.4 GHz / AMD Athlon 2600+ or any Dual Core Processor @ 1.8 GHz
  • Memory: 512 Mb
  • Hard Drive: 8 Gb free
  • Video Card: nVidia 6600 / ATI Radeon 9800Pro
  • Sound Card: DirectX 9.0c Compatible
  • DirectX: 9.0c
  • Keyboard
  • Mouse
  • DVD Rom Drive

Recommended System Requirements
  • OS: Windows XP/Vista
  • Processor: Any Dual Core Processor 2.4 GHz or faster
  • Memory: 1 Gb
  • Hard Drive: 8 Gb free
  • Video Card: nVidia 7800 / ATI Radeon X1800
  • Sound Card: DirectX 9.0c Compatible
  • DirectX: 9.0c
  • Keyboard
  • Mouse
  • DVD Rom Drive

Update Link Download (14-05-2013)
Mirror via PutLocker
Mirror via UPaFile
Mirror via CyberLocker
Mirror via BillionUploads
250 MB / Part
    Password: koskomputer.blogspot.com

    Installation
    1. Unrar
    2. Open the .iso file with a program that can mount it (PowerISO, Daemon tools, Alcohol for example)
    3. Mount it. If you don't have the autorun option turned on, go to My Computer and open it from the CD drive.
    4. Install it. When it asks for the CD key, minimize the installer and open rzr-cod4.exe, then generate the CD key and copy it into the installer.

    ***Some antivirus programs may tell you that rzr-cod4.exe is a virus of some kind. IT'S A FALSE POSITIVE! It's perfectly safe. The antivirus program thinks it's a virus, but it isn't.***

    5. The installer will also ask if you want to install PunkBuster. I installed it; I don't know if it matters.
    6. When it's done installing, copy or move iw3sp.exe to the game folder (where you installed the game).
    7. Launch the game by opening iw3sp.exe and it should work.
    8. Support the software developers. If you like this game, BUY IT!

    Info
    1. PL, UPa, CL, BU Interchangeable Links
    2. Total part: 10 / 700 MB
    3. Total file : 6.32 GB
              FIFA 13 INTERNAL-RELOADED        

    FIFA 13 INTERNAL-RELOADED
    Release Date: 7 Oct 2012
    Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads
    Uploaded | Rapidgator | Turbobit
    COPYRIGHT : KOS KOMPUTER

    Free Download PC Game FIFA 2013 Full Version - FIFA 13 captures all the drama and unpredictability of real-world football. This year, the game creates a true battle for possession across the entire pitch, and delivers freedom and creativity in attack. Driven by five game-changing innovations that revolutionize artificial intelligence, dribbling, ball control and physical play, FIFA 13 represents the largest and deepest feature set in the history of the franchise.

    Features
    • All-new positioning intelligence infuses attacking players with the ability to analyze plays, and to better position themselves to create new attacking opportunities.
    • Make every touch matter with complete control of the ball. Take on defenders with the freedom to be more creative in attack.
    • A new system eliminates near-perfect control for every player by creating uncertainty when receiving difficult balls.
    • The second generation of the physics engine expands physical play from just collisions to off-the-ball battles, giving defenders more tools to win back possession.
    • Create dangerous and unpredictable free kicks. Position up to three attacking players over the ball and confuse opponents with dummy runs, more passing options, and more elaborate free kicks.
    • Compete for club and country in an expanded Career Mode that now includes internationals. Play for or manage your favorite national team, competing in friendlies, qualifiers and major international tournaments.
    • Learn or master the fundamental skills necessary to compete at FIFA 13 in a competitive new mode. Become a better player, faster, no matter what your skill level. Compete against yourself or friends in 32 mini-games perfecting skills such as passing, dribbling, shooting, crossing and more.
    • Earn rewards, level up, enjoy live Challenges based on real-world soccer events, and connect with friends. Everything within FIFA 13, and against friends, is measured in a meaningful way.
    • Access your Football Club identity and friends, manage your FIFA Ultimate Team, search the live auctions and bid to win new players.
    • 500 officially licensed clubs and more than 15,000 players.

    Release NOTE: It's internal because the DRM is bypassed using a loader. The game works, but it's not how we would usually release a crack.
    Screenshot

    Minimum System Requirements
    • OS: Windows XP/Vista/7
    • Processor: Intel Core 2 Duo @ 2.4 Ghz / AMD Athlon 64 X2 5000+
    • Memory: 2 Gb
    • Video Memory: 512 Mb
    • Video Card: nVidia GeForce 8800 / ATI Radeon HD 2900
    • Sound Card: DirectX Compatible
    • DirectX: 9.0c
    • Keyboard
    • Mouse
    • DVD Rom Drive

    Update Link download (05-05-2013)
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via Cyberlocker
    Mirror via BillionUploads
    Mirror via Uploaded, Rapidgator, Turbobit
    Password: koskomputer.blogspot.com
    Installation
    1. Unrar.
    2. Burn or mount the image.
    3. Install the game.
    4. Copy the cracked files from the \Crack directory on the disc to the \Game directory, overwriting the existing exe.
    5. Before you start the game, use your firewall to block all exe files in the game's install directory from going online. Also run the game setup before starting; it can be found in the following directory: \Game\fifasetup
    6. Play the game. While in game, avoid all of the online options. If you have Origin installed, it may start it up. If that happens, ignore the prompt, play offline, and don't login.
    7. Enjoy!

    Info
    1. PL, UPa, CL, BU Interchangeable Links
    2. Total part: 10 / 700 MB
    3. Total file : 6.4 GB

    1. UL, RG, TB Interchangeable Links
    2. Total part: 7 / 1.00 GB
    3. Total file : 6.4 GB

              Mars: War Logs-COGENT        

    Mars: War Logs - COGENT
    Release Date: 26-04-2013
    Language: English
    Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads
    COPYRIGHT : KOS KOMPUTER

    Free download PC game 2013 Mars: War Logs Full Version - In the destroyed world of Mars, two destinies mingle together. Two beings searching for their identity travel across a broken planet, constantly facing bloody political conflicts which tear the old colonies apart. Often divided, sometimes fighting the same enemies, both are the source of the advent of a new era…

    Mars: War Logs is a sci-fi action RPG that innovatively merges character development with light, rhythmic combat. It takes you on a journey deep into an original futuristic universe and presents you with scenarios dealing with difference, racism and the environment.

    Features
    • Take on the role of Roy Temperance, a multi-talented renegade, and surround yourself with companions with real personalities.
    • Choose from the numerous dialog possibilities and influence the destiny of your people.
    • Personalize your fighting style through a dynamic and developed combat system, for entirely different approaches depending on the choices you make.
    • Personalize your development by choosing from dozens of skills and numerous additional perks!
    • Modify and create your own equipment with our craft system.

    Screenshot

    Minimum System Requirements
    • OS: Windows XP/Vista/7/8
    • Processor: Intel Core 2 Duo @ 2.2 Ghz / AMD Athlon 64 X2 4600+
    • Memory: 2 Gb
    • Hard Drive: 3 Gb free
    • Video Memory: 512 Mb
    • Video Card: nVidia GeForce 8800 / ATI Radeon HD 3870
    • Sound Card: DirectX Compatible
    • Network: Broadband Internet Connection
    • DirectX: 9.0c
    • Keyboard
    • Mouse

    Link download
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via Cyberlocker
    Mirror via BillionUploads
    Password: koskomputer.blogspot.com

    Installation
    1. Unrar
    2. Mount or burn
    3. Install
    4. Copy contents of Crack Directory to install directory
    5. Play the game
    6. Support the software developers. If you like this game, BUY IT!

    Info
    1. PL, UPa, CL, BU Interchangeable Links
    2. Total part: 8 / 350 MB
    3. Total file : 2.54 GB
              Day of the Zombie Repack Version        

    Day of the Zombie Repack Version
    Release Date: 2009
    Language: Russian | English
    Mirrors: PutLocker | UPaFile | BillionUploads
    COPYRIGHT : KOS KOMPUTER

    Free download PC Game Day of the Zombie Full Version - This game was made by Groove Games. It is basically the same as Land of the Dead, but with a new single-player campaign and a few new features.

    The story mode is more fun and better set up than Land of the Dead too, though the story is rather weak. You play as three different people: a college student looking for his girlfriend, a college janitor trying to save his school (he thinks it's all student pranks), and an army soldier (trying to find civilians, presumably).

    Screenshot

    Minimum System Requirements
    • Operating system: Windows 2000/XP/Vista
    • Processor: Pentium 4 or AMD Athlon at 2.0 GHz or faster
    • RAM: 256 MB
    • Free hard disk space: 1.5 GB
    • Video card: DirectX 9.0c compatible (ATI Radeon 9600 or NVIDIA GeForce4 Ti 4600 with 128 MB) or more powerful
    • Sound card: DirectX and Windows compatible
    • DirectX: version 9.0 or later
    • CD-ROM: 4x or faster
    • Mouse and keyboard.

    Link download
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via BillionUploads
    Password: koskomputer.blogspot.com

    Installation
    1. Unrar
    2. Mount or burn
    3. Install
    4. Play the game

    Note: If you want to change the language, you must find zombie.ini in system folder and change line language=rus to language=int

              Post Apocalyptic Mayhem-PROPHET        

    Post Apocalyptic Mayhem-PROPHET
    Release Date: 04-2014
    Language: English | French | German | Italian | Spanish
     Polish | Russian | Japanese
    Mirrors: PutLocker | UPaFile | Filewinds | BillionUploads
    COPYRIGHT : KOS KOMPUTER

    Free download PC game Post Apocalyptic Mayhem Full Version - Post Apocalyptic Mayhem lets you race and battle heavily-modified vehicles through numerous breathtaking tracks and lay waste to other racers in over-the-top vehicular mayhem. You can use special vehicle abilities to cause spectacular destruction to enemy cars as you fight and speed to victory. You'll experience remarkable speeds, hilarious and violent weapons, reinforced vehicles and various exhilarating tracks.

    Note
    This version includes the vehicles: Kitty, Nucloid and the Veteran and tracks: Death Area 8, Airplane Cemetery and Abandoned Sawmill. Also it includes the Chaos Pack DLC. Game version is 1.03.272.

    Screenshot

    Minimum System Requirements
    • OS: Windows XP/Vista/7
    • Processor: Intel Pentium 4 @ 3.0 GHz / AMD Athlon 64 3200+
    • Memory: 1 Gb
    • Hard Drive: 1 Gb free
    • Video Memory: 256 Mb
    • Video Card: nVidia GeForce 6800 / ATI Radeon X1800
    • Sound Card: DirectX Compatible
    • DirectX: 9.0c
    • Keyboard
    • Mouse

    Link download
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via Filewinds
    Mirror via BillionUploads
    Password: koskomputer.blogspot.com

    Installation
    1. Unpack, burn or mount
    2. Install the game
    3. Copy the cracked content from PROPHET dir
    4. Go To Hell!

    Info
    1. PL, UPa, FW, BU Interchangeable Links
    2. Total part: 4 / 250 MB
    3. Total file : 864 MB
              F1 2012-FLT - UPafile        
    F1 ( Formula 1 ) 2012-FairLight
    Release Date: 18/09/2012
    Mirrors: PutLocker | UPaFile | BillionUploads
    COPYRIGHT : KOS KOMPUTER

    FREE Download PC Game F1 2012 Full Version - Codemasters Racing presents F1 2012, the next game in the BAFTA-winning series featuring all the official drivers, teams and circuits from the 2012 FIA FORMULA ONE WORLD CHAMPIONSHIP. Learn the basics and master the challenge of driving the best machines on the planet in the Young Driver Test. Experience the next generation in weather system technology, where storm fronts move across the circuits, soaking specific areas of the track, as well as racing around the all-new Circuit of The Americas in Austin, Texas, home of the 2012 FORMULA 1 UNITED STATES GRAND PRIX. Two new quick-fire game options - Season Challenge, a complete Career in just 10 races, and Champions Mode Scenarios, where you test your skills against the very best - complete an exciting line-up of gaming options which also includes a 5-year Career, Co-op Championship, 16-player Multiplayer and Time Attack Scenarios.

    Features
    • Formula One returns to the USA in 2012 at the all-new Circuit of the Americas, located in Austin, Texas, and players can drive on the circuit ahead of the track's debut in November.
    • Gamers will be introduced to the world of Formula One and learn the nuances of how to drive a Formula One car by taking part in the all-new Young Driver Test at Abu Dhabi's Yas Marina.
    • Codemasters' Formula One series has set the standard for weather in racing games and players will be able to experience new enhancements that will raise the bar further in F1 2012.
    • F1 2012 will feature all-new lap walkthroughs from Formula One test driver and Codemasters technical consultant Anthony Davidson.
    • Expected to attract 120,000 fans on race day, the Circuit of the Americas will be a spectacular addition to the Formula One calendar and will be recreated in full high definition in F1 2012.
    Screenshot

    Minimum System Requirements
    • OS: Windows XP/Vista/7
    • Processor: Intel Core 2 Duo @ 2.4 Ghz / AMD Athlon 64 X2 5200+
    • Memory: 2 Gb
    • Hard Drive: 15 Gb free
    • Video Memory: 256 Mb
    • Video Card: nVidia GeForce 8600 / ATI Radeon HD 2600
    • Sound Card: DirectX Compatible
    • DirectX: 9.0c
    • Keyboard
    • Mouse
    Recommended System Requirements
    • OS: Windows Vista/7
    • Processor: Intel Core i7 @ 2.66 GHz / AMD Phenom II X4 @ 3.0 GHz
    • Memory: 4 Gb
    • Hard Drive: 15 Gb free
    • Video Memory: 1 Gb
    • Video Card: nVidia GeForce GTX 560 / ATI Radeon HD 6850
    • Sound Card: DirectX Compatible
    • DirectX: 9.0c
    • Keyboard
    • Mouse
    Update Link download (15-04-2013)
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via BillionUploads
    Password: koskomputer.blogspot.com
    Installation
    1. Unrar
    2. Mount or Burn it
    3. Install the game
    4. Play

    Info
    1. PL, UPa, BU Interchangeable Link
    2. Total part: 14 / 400 MB
    3. Total file : 5.34 GB / 5.30 GB Compressed

              God Mode-RELOADED        
    God Mode-RELOADED
    Release Date: 04-2013
    Language: English | German | French | Italian | Spanish | Russian
    Mirrors: PutLocker | UPaFile | BillionUploads 
    COPYRIGHT : KOS KOMPUTER

    Free download PC game God Mode Full Version - God Mode is set in a twisted version of Greek mythology and the afterlife. The player is a descendant of an ancient god whose family has been banished by Hades from Mt. Olympus and turned into mere mortals. To avoid an afterlife of eternal damnation, the player must battle through this purgatory, known as the Maze of Hades, against an army of the underworld.

    God Mode combines non-linear gameplay, fast and frantic shooting, hordes of on-screen enemies, and a fully functional online co-op mode. Matches rarely ever play out the same way, as dozens of in-game modifiers can significantly alter the gameplay on the fly. Characters are fully customizable, both in appearance and equipment, and continually evolve. Gold and experience are constantly accrued and used to unlock satisfying new weaponry and unique, powerful abilities, both of which can be further upgraded.

    Screenshot

    Minimum System Requirements
    • OS: Windows XP/Vista/7
    • Processor: Intel Core 2 Duo @ 2.0 Ghz / AMD Athlon 64 X2 4200+
    • Memory: 2 Gb
    • Hard Drive: 5 Gb free
    • Video Memory: 512 Mb
    • Video Card: nVidia GeForce 8800 / ATI Radeon HD 2900
    • Sound Card: DirectX Compatible
    • DirectX: 9.0c
    • Keyboard
    • Mouse

    Recommended System Requirements
    • OS: Windows XP/Vista/7
    • Processor: Intel Core i5 @ 2.4 GHz / AMD Phenom II X4 @ 2.6 GHz
    • Memory: 3 Gb
    • Hard Drive: 5 Gb free
    • Video Memory: 1 Gb
    • Video Card: nVidia GeForce GTX 460 / ATI Radeon HD 5850
    • Sound Card: DirectX Compatible
    • Network: Broadband Internet Connection
    • DirectX: 9.0c
    • Keyboard
    • Mouse

    Link download
    Mirror via PutLocker
    Mirror via UPaFile
    Mirror via BillionUploads
    Password: koskomputer.blogspot.com

    Installation
    1. Unrar.
    2. Burn or mount the image.
    3. Install the game.
    4. Copy over the cracked content from the /Crack directory on the image to your game install directory.
    5. Before you start the game, use your firewall to block the game exe file from going online.
    6. Play the game, change party settings to LAN.
    7. Support the software developers. If you like this game, BUY IT!

    Info
    1. PL, UPa, BU Interchangeable Links
    2. Total part: 6 / 300 MB
    3. Total file : 1.77 GB
              175008 CamSys S-Color 10K Endoscope Camera Set        
    175008 CamSys S-Color 10K Endoscope Camera Set

    For pipes 40 - 150mm, drains, shafts, stacks and other cavities. Documentation of photos and videos on SD card with specification of date and time. Controller unit in microprocessor technology with 3.5" TFT-LCD colour display, SD card slot, USB port and integrated 3.7V, 2.5Ah Li-ion battery, in a sturdy, impact-proof, spray-protected plastic housing. Includes a 2m connecting cable from the controller unit to the camera cable set, voltage supply/charger 100 - 240V, 50 - 60Hz, 13W, 2GB SD card, USB cable, video cable and a sturdy case.


              Firing frogdesign        
    Style and aesthetics are personal opinions, but Apple hurt itself when it stopped using frogdesign's Snow White language. Before 1994, Apple had distinctive, even cool-looking products, like the Apple IIc and the Macintosh. As part of the cost-cutting measures introduced by Spindler, Apple spent less on cases, and it showed. Suddenly, the Macintosh was no more attractive than any other PC in the world.

    Am I the only one to see little difference in the aesthetics of post-1993 Macs? I certainly didn't notice either desktop Macs or PowerBooks instantly become more ugly or PC-like in 1994. The author picks out the PowerBook 520/540 design as "dated... even before it was released", yet I remember it receiving rave reviews for its ergonomics and aesthetics, both in the Mac press and in PC magazines. For example, Mobile PC Magazine placed it #22 on their list of the 100 best gadgets of all time; here's a quote: "The PowerBook 500 wowed the notebook market with a long string of firsts: the first touch pad; the first stereo speakers (with 16-bit sound); the first expansion bay -- and the first PC Card slot; the first 'intelligent' nickel metal hydride battery, with a processor that communicated battery status to the operating system; and, last but not least, the first curvaceous case, with gratuitously swooped edges and corners instead of the boxy angles of previous notebooks. Make no mistake, this notebook set the agenda for the following 10 years of portable computer design."

    In comparison, the PowerBook 100 series were quite chunky looking. The original PowerBook 100 was an innovative laptop, but by the time the 140/170 were released, there was little to distinguish the design from PC laptops that were available at the time. For people who've never seen these computers, you can compare them here and judge for yourself:

    As for desktop Macs, I'd agree that the Power Mac 4400/7220 was one of Apple's worst products. It was Apple's attempt to create an inexpensive Power Mac by cutting corners and using industry-standard components. As bad as it was, it wasn't really representative of Apple's design ability at the time, and its aesthetic design wasn't actually that different to earlier Macs. I doubt many people who aren't Apple historians would notice the difference between the 4400/7220 and a pre-1994 Quadra; again, you can judge for yourself:
              RE: I want to buy a mac        
    I'd wait until the Core 2 processors trickle down from the Mac Pros. They're much more powerful than the regular Core processors.
              WWE 13 Wii ISO        




    Release Date: October 30, 2012
    T for Teen: Blood, Crude Humor, Mild Language, Mild Suggestive Themes, Use of Alcohol, Violence
    Genre: Wrestling
    Publisher: THQ
    Developer: Yuke's Media Creations


    Step into the ring of the '13 edition of the WWE video game. WWE '13 transforms gameplay through the introduction of WWE Live, completely changing the way players embrace the videogame's audio and presentation elements. Predator Technology returns to further implement critical gameplay improvements, while fan favorites in WWE Universe Mode and the franchise's renowned Creation Suite are poised to offer the utmost in player freedom. Furthermore, WWE '13 introduces a groundbreaking, single-player campaign based on the highly influential Attitude Era. Complete with a robust roster (the largest to date in the franchise) along with a host of additional features, WWE '13 is ready to live a revolution of its own.



    Trailer




    Anyway, from the IQ files I managed to grab on this first trip, here are a few of the more interesting signals.


    93.2 BCFM (can be stronger than this on Lansdown, but I'd have to move the aerial around a lot to improve it)
    96.1 BBC Solent
    97.2 Wessex FM
    97.5 Somer Valley FM
    96.6 Frome FM
    98 Ujima (strange effect on some stations causes there to be a trough rather than a peak at the center frequency, not sure why this happens)
    105.8 Wave 105
    105.5 WCR (Warminster Community Radio)
    106.3 Bridge FM



    I went back for a second go last weekend. I think I was up there about 3 hours in total! I was freezing by the end :D I found that different spots on the playing fields gave slightly different results, so I had to try and do my best moving around with my flimsy wire dipole strung up in different trees / bushes, or even just being held as I was walking around. I need to build a portable wire yagi I reckon, one I can fold up and stuff in my backpack. I won't bore you with more videos, here's the best I can do going by what's in the IQ files...


    87.5
    87.6
    87.7
    87.8
    87.9
    88.0
    88.1 Radio 2
    88.2 Radio 2
    88.3 Radio 2
    88.4
    88.5 Radio 2
    88.6 Radio 2
    88.7
    88.8 Radio 2
    88.9
    89.0 Radio 2
    89.1 Radio 2 (possibly)
    89.2
    89.3 Radio 2
    89.4
    89.5 Radio 2
    89.6
    89.7
    89.8
    89.9 Radio 2
    90.0
    90.1 Radio 2
    90.2
    90.3 Radio 3
    90.4 Radio 3
    90.5 Radio 3
    90.6
    90.7 Radio 3
    90.8 Radio 3
    90.9
    91.0 Radio 3
    91.1
    91.2 Radio 3
    91.3 Radio 3
    91.4
    91.5 Radio 3
    91.6
    91.7 Radio 3
    91.8
    91.9
    92.0
    92.1 Radio 3
    92.2
    92.3 Radio 3
    92.4
    92.5 Radio 4
    92.6 Radio 4
    92.7 Radio 4
    92.8
    92.9 Radio 4
    93.0 Radio 4
    93.1
    93.2 BCFM / Radio 4 (directional)
    93.3
    93.4 Radio 4
    93.5 Radio 4
    93.6
    93.7 Radio 4
    93.8
    93.9 Radio 4
    94.0
    94.1
    94.2
    94.3 Radio 4
    94.4
    94.5 Radio 4
    94.6
    94.7 BBC Hereford and Worcester
    94.8
    94.9 BBC Bristol
    95.0 BBC Gloucestershire
    95.1
    95.2 BBC Oxford
    95.3
    95.4
    95.5 BBC Somerset
    95.6
    95.7
    95.8 BBC Gloucestershire (or Wales, dunno!)
    95.9 BBC Wales
    96.0 BBC Unid - Too far back in the mush
    96.1 BBC Solent
    96.2
    96.3 Heart
    96.4 Maybe something
    96.5 Unid - Too far back in the mush, Could be Wave or Heart
    96.6 Frome FM
    96.7 Heart
    96.8 BBC Cymru
    96.9
    97.0 Fantasy Radio (Probably!)
    97.1 Heart
    97.2 Heart / Kiss 101 / Wessex FM (Directional)
    97.3
    97.4 Capital - Quite hidden by Somer FM
    97.5 Somer Valley FM
    97.6
    97.7 Radio 1
    97.8 Radio 1
    97.9 Radio 1
    98.0 Ujima
    98.1
    98.2 Radio 1
    98.3
    98.4 Radio 1
    98.5
    98.6 Radio 1
    98.7 Radio 1
    98.8
    98.9 Radio 1
    99.0
    99.1 Radio 1
    99.2
    99.3
    99.4
    99.5 Radio 1
    99.6
    99.7 Radio 1
    99.8
    99.9
    100.0 Classic
    100.1 Classic
    100.2 Classic
    100.3 Classic
    100.4 Classic
    100.5
    100.6 Classic (slight trace)
    100.7 Heart (slight trace)
    100.8 Classic
    100.9
    101.0 Kiss 101
    101.1
    101.2
    101.3 Classic
    101.4 Classic
    101.5
    101.6
    101.7 Classic
    101.8
    101.9
    102.0 Spire FM
    102.1
    102.2 Heart
    102.3 Heart
    102.4 Breeze
    102.5
    102.6 Heart
    102.7
    102.8 Breeze (slight trace)
    102.9 Heart
    103.0 Heart
    103.1
    103.2 Capital
    103.3
    103.4 BBC Devon
    103.5 BBC Wiltshire
    103.6 BBC Bristol
    103.7
    103.8 BBC Solent
    103.9 BBC Wales
    104.0
    104.1 BBC Berkshire
    104.2
    104.3 BBC Wiltshire
    104.4 Unid (slight trace)
    104.5
    104.6 BBC Bristol
    104.7 Unid (slight trace)
    104.8
    104.9 BBC Wiltshire
    105.0
    105.1
    105.2 Wave 105
    105.3
    105.4 Heart
    105.5 Swindon 105.5 / WCR (directional)
    105.6 Breeze
    105.7 Heart
    105.8 Wave 105
    105.9 Heart
    106.0 Sam FM
    106.1 Badminton Radio (RSL)
    106.2 Sunshine
    106.3 Bridge FM
    106.4
    106.5 Sam FM
    106.6 Breeze
    106.7
    106.8 Nation
    106.9
    107.0
    107.1 Unid (slight trace)
    107.2 Breeze
    107.3
    107.4 Breeze
    107.5
    107.6
    107.7 Sam FM
    107.8
    107.9 Breeze
    108.0


    I have a better SDR device on the way - an SDRPlay - which doesn't suffer so much with overloading and images. If I can get that running from the tablet, it'll make things even easier for bandscanning, but it might be a bit too much to try to power it from the USB socket alone.
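
    For anyone curious how stations get picked out of those IQ files, the rough idea is just to average a power spectrum over the recording and look for peaks well above the noise floor. The sketch below is a minimal, hypothetical Python/NumPy example of that; the file name, sample rate and centre frequency are assumptions for illustration, not details from the recordings described above.

        # Hypothetical sketch: skim an FM-band IQ recording for carriers.
        # Assumed details (not from the post): raw complex64 samples in
        # "capture.iq", recorded at 2.4 MS/s with the tuner centred on 97.0 MHz.
        import numpy as np

        SAMPLE_RATE = 2.4e6   # samples per second (assumed)
        CENTER_FREQ = 97.0e6  # tuner centre frequency in Hz (assumed)
        FFT_SIZE = 4096

        iq = np.fromfile("capture.iq", dtype=np.complex64)

        # Average the power spectrum over many FFT frames to smooth out noise.
        frames = iq[: len(iq) // FFT_SIZE * FFT_SIZE].reshape(-1, FFT_SIZE)
        window = np.hanning(FFT_SIZE)
        spectra = np.abs(np.fft.fftshift(np.fft.fft(frames * window, axis=1), axes=1)) ** 2
        power_db = 10 * np.log10(spectra.mean(axis=0) + 1e-12)

        # Map FFT bins to absolute frequency and report bins well above the noise floor.
        freqs = CENTER_FREQ + np.fft.fftshift(np.fft.fftfreq(FFT_SIZE, d=1 / SAMPLE_RATE))
        threshold = np.median(power_db) + 10  # 10 dB over the median as a crude carrier test
        for f, p in zip(freqs, power_db):
            if p > threshold:
                print(f"{f / 1e6:7.2f} MHz  {p:6.1f} dB")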
          Apple will use Intel microprocessors from 2006 onwards        
    none
              itel Mobile launches 'PowerPro P41' at Rs 5999        

    New Delhi: itel Mobile, part of Chinese mobile manufacturer Transsion Holdings, on Thursday launched its latest smartphone 'PowerPro P41' at Rs 5,999 for the Indian market.

    The highlight of the 4G smartphone is its 5,000mAh battery, which is claimed to offer a stand-by time of up to 35 days and a talk time of up to 51 hours.
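
    As a rough sanity check on that claim, the implied average current draw can be worked out from the quoted 5,000mAh capacity and the claimed stand-by and talk times. The short sketch below just does that arithmetic; it assumes the full rated capacity is usable and ignores voltage and conversion losses.

        # Back-of-the-envelope check of the quoted battery figures
        # (assumes the full 5,000 mAh is usable and ignores conversion losses).
        CAPACITY_MAH = 5000
        standby_hours = 35 * 24  # claimed stand-by time: up to 35 days
        talk_hours = 51          # claimed talk time: up to 51 hours

        print(f"Implied stand-by draw: {CAPACITY_MAH / standby_hours:.1f} mA")  # ~6.0 mA
        print(f"Implied talk-time draw: {CAPACITY_MAH / talk_hours:.1f} mA")    # ~98.0 mA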

    Powered by a 1.3GHz quad-core processor, the device comes with 1GB RAM and 8GB internal storage, which is expandable up to 32GB.

    "Equipped with state-of-the-art features such as the latest Android 7.0 Nougat and an unmatched long-lasting battery, the 'PowerPro P41' is the perfect combination of high-grade performance and stylish design aimed at delighting Indian customers," said Sudhir Kumar, CEO, itel Mobile India, in a statement.

    The smartphone has a 5-inch FWVGA display and sports a 5MP rear camera with flash and a 2MP front camera. The device provides connectivity options via Wi-Fi, Bluetooth 4.0 and OTG.



          iMac G5. The world's flattest desktop computer contains a fast G5 processor        
    none
              Sprout-tastic        
    Sprouts are the best. They're the lazy man's greens and there are so many varieties to get you through the year.

    I'll backtrack. We all know that vegetables are super important in any diet. Particularly raw vegetables. Particularly raw, green, leafy vegetables. But it's not always easy to eat enough of them, especially in winter. Who wants big leafy salads loaded with cooling produce (like lettuce and cucumbers) when it's freezing outside?

    This is why sprouts save the day. Did you know that one cup of lentil sprouts contains 2.5mg of iron? And almost 7 grams of protein? You'd have to eat about 7 cups of raw spinach to get that much nutrition.

    And, sprouts are easy to make, even for the black-thumbed gardeners out there. Soak some seeds for 8-12 hours or overnight, then drain and rinse a couple of times a day until little tails have formed. There are plenty of step-by-step sprouting guides on the internets so I won't go into detail here. No fancy equipment necessary. Just a jar, seeds, water, and TLC.

    Here's a terrible photo of some sprouts I made today. 



    The tails are about the same length as the seed, and that's when I like to eat them. Here we have a lentil sprout and a mung bean sprout - but of course you can sprout anything, like chickpeas, wheat, almonds, sunflower seeds... Any whole seed, bean, or grain.

    Then what? Well, you can cook them but that will lead to some nutrient losses. Also I personally think they taste better raw. Just throw a hefty handful on top of any dish - soup, salad, stew, or mixed through rice or quinoa. As long as your food isn't piping hot when you mix the sprouts through, you'll retain the nutrients.

    You can also create a side dish using the sprouts. They taste great with a sprinkle of cumin and a squeeze of lemon juice. Or, you can try my version of raw chili. You'll need:
    1 cup of sprouts (I like mung bean sprouts for this)
    A half cup of corn (optional, not everyone handles raw corn well)
    2 large or 3 medium tomatoes 
    Half a red capsicum
    A big handful of fresh herbs like parsley and oregano (dried is ok too)
    A few squeezes of lemon
    A chilli pepper (optional).

    Set aside sprouts and corn in a bowl. Put the tomato, capsicum, herbs, lemon, and chilli into a food processor. This will be the sauce, so make it as smooth or as chunky as you like. Pour the sauce over the sprouts and corn, and stir through. 



    You can eat it just like this, or spoon it into cos lettuce leaves to make "boats". It also tastes good as a topping for a baked potato. 

    You could add cumin and use coriander leaves as the herb, and then put this chili in a burrito or atop nachos. 

    If you're so inclined, you could dice celery and capsicum into this chili and make it even more like its cooked counterpart. 

    Leftovers keep pretty well, although the liquid from the sauce may separate and need to be stirred through. 

    If you're like me and feeling a bit stodgy from all the winter comfort food, try adding some sprouts to your meals. I'm pretty excited about sprouts, and with their help I think I may just make it through another Melbourne winter :)

              Five fun fruit ideas        
    It's Friday! I thought I'd cap off the work week with five fun fruity ideas. Sure, I love unadulterated fruit but sometimes it's nice to make fruit meals extra exciting!

    1. Freeze your fruit.
    And then thaw it until it's just becoming soft, and give it a whirl through the blender or food processor. Result? Fruit ice cream! Dairy-free, nut-free, soy-free, egg-free, raw, and vegan. What's not to love? Bananas and mangoes are a personal favourite, but any creamy fruit will work. For more watery fruits, like berries, use frozen bananas as a base and blend the berries together with the bananas. Strawberries or blueberries are delicious (and you get pink and purple ice cream!). Stir through some berries or chopped fruit for a sundae!

    2. Fruit pops
    Remember icy poles (or Popsicles)? So good in summer but commercial icy poles are just sugar and chemicals. Make your own fruit pops with juice or blended fruits, and Popsicle moulds. You can line the moulds with fruit slices and fill the remainder with juice. Or, try making layered icy poles by filling the moulds halfway with one type of juice, freezing, and filling the remainder with a different juice.

    3. Fruit sauce
    This is simply blending dates with a little water into a "caramel" sauce. You can add fruit to make a sweet sauce. Particularly good with fruits that are slightly tart, like raspberries. The sauce can be used on fruit ice cream or to dress up a fruit salad.

    4. Fruit cake
    I've seen this online several times, but have yet to make it myself. The base of the cake is a large disc of watermelon with the rind removed. The disc can be as large as you like, although it does depend on the size of the watermelon! Place the disc on a serving plate on a flat edge. Then, decorate. This is a great one for the kids to get into. You can cover the watermelon with sliced fruit, or make some fun shapes like stars and hearts. You can also skewer fruit and fruit shapes on toothpicks, and insert them into the top and sides of the cake.

    5. Fruit sushi
    For the wrapping, you'll need a neutral fruit like cucumber or zucchini (yes, both are botanically fruits!) Use a slicer or veggie peeler to make long strips. Arrange slices of fruit at the end of each strip, such as mango, papaya, and pineapple. This is the "filling". You can cut the fruit slices to fit the width of the strip, or just let them overhang - it still looks great! Don't over-fill the roll otherwise it will fall apart. Once you have all the fillings in place, start rolling! You can put some fruit sauce in the roll, or drizzle the sauce over the top.


              Businesses Can Save Money By Shopping For Credit Card Processors        
    none
              Oster Pro Blender 3-in-1 with Food Processor $56.98 (Lowest Price)        

    This post may contain affiliate links. See my disclosure policy to learn more. Remember that pricing on Amazon is subject to change at any time.   Amazon just dropped the price on this Oster Pro Blender 3-in-1 with Food Processor Attachment and XL Personal Blending Cup to $56.98, which is by far the lowest price […]

    The post Oster Pro Blender 3-in-1 with Food Processor $56.98 (Lowest Price) appeared first on Passionate Penny Pincher.


                      
    The Industry’s First GPU-based Enterprise Stream Processing Solution
    Introducing the AMD Stream Processor™ - the first dedicated, GPU-based solution to address the needs of high-performance computing users. Featuring the AMD R580 GPU and a 1GB memory buffer, the AMD Stream Processor™ delivers the computing horsepower to handle the most complex compute tasks. The AMD Stream Processor™ delivers this performance in conjunction with AMD's unique compute runtime driver (CTM™), thereby enabling the most direct access to GPU resources in the industry.[1]
    Next Generation Stream Computing
    Featuring full Shader Model 3.0 support and a scalable ultra-threaded architecture with true 32-bit floating point precision, 48 shader processors, and an ultra-efficient 512-bit ring bus memory controller, the AMD Stream Processor™ delivers floating point performance which is an order of magnitude above the CPUs of today.[2] Scientists, researchers, and analysts will now be able to benefit from this tremendous increase in computing power to process larger, more complex mathematical models and simulations.
    World Class Performance, Reliability and Support
    AMD Stream Processors are continually tested to ensure optimized performance and compatibility on a wide range of today's platforms. AMD offers free customer access to our technical enterprise support team.
    [1] AMD Press Release: 13th November 2006: AMD UNLEASHES THE POWER OF STREAM COMPUTING WITH NEW "CLOSE TO METAL" TECHNOLOGY.
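
    To make the marketing term a little more concrete: "stream processing" here means running the same small kernel independently over every element of a large data stream, which is exactly the pattern that maps well onto a GPU's many shader processors. The snippet below is a generic, CPU-side illustration of that pattern in Python/NumPy; it is not CTM code, and the kernel shown is an arbitrary example.

        # Conceptual illustration of stream processing: one small kernel applied
        # independently to every element of a large input stream. On a stream
        # processor the elements are spread across many shader/ALU units; here
        # NumPy's vectorisation plays that role on the CPU.
        import numpy as np

        def kernel(x, a=2.5, b=1.0):
            # An arbitrary per-element kernel, y = a*x + b, with no cross-element dependencies.
            return a * x + b

        stream_in = np.random.rand(10_000_000).astype(np.float32)  # the input "stream"
        stream_out = kernel(stream_in)                              # whole stream processed in one pass

        print(stream_out[:5])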
              Is the AMD Opteron mainboard project real?        

    Is the AMD Opteron mainboard project real?

    AMD NEWS
    Rumors on the Internet say that AMD (Advanced Micro Devices) is readying a new Opteron Socket F server mainboard. The socket is said to be named LGA1207 and to have a pinless processor interface; it reportedly has 1207 contacts and uses a retention frame similar to that of the Intel Socket T (LGA775).

    Rumors also say that the new AMD Opteron processor is going to support 667 and 800 MHz memory modules, and that support for DDR2-533 will be available as well, along with PCI Express. The photo above is said to show the LGA1207 pinless processor interface of the new Opteron Socket F server mainboard.

    To round out the rumors, it is said that Wal-Mart will sell AMD-powered notebooks and desktop PCs at a stunning US$398 price. In short, AMD's success in the US retail market cannot be ignored, but Intel still comes first in overall US consumer PC sales. Dell, the number one PC supplier in the US consumer market, uses only Intel processors.
              Intel's Pentium 4 processor        
    Intel just issued a press release yesterday covering the re-opening of its high-volume semiconductor manufacturing facility in Chandler, Arizona, converting it to a leading-edge 300-mm, 65-nm process factory. Obviously this new factory will be producing the new 65-nm Pentium 4 processors that go by the codename 'Cedar Mill' and will be introduced in 2006. Cedar Mill Pentium 4s will be introduced in the 600-series of Intel Pentium 4 processors and hence will feature 2MB of L2 cache memory. These same processor cores will also end up in Intel's dual-core processors, much as today's Pentium Ds are basically two Prescott cores on a single substrate. Future Pentium D processors, codenamed 'Presler', will simply have two Cedar Mill cores which are physically separated, unlike the current Smithfield Pentium Ds.



    Intel's Pentium 4 processor which was first introduced in 2000.

    With the new Presler Pentium Ds, Intel will introduce new model numbers, as the Preslers will be dubbed the 900-series. And of course there will be a series of Extreme Edition processors based on Presler. The new XEs will have support for Hyper-Threading and a 1066MHz front side bus, unlike the 900-series, effectively making them the fastest processors Intel has to offer. Unfortunately, all of these processors and the NetBurst architecture introduced in 2000 will be superseded by a new architecture that Intel will launch in late 2006.

    This new architecture will feature all of the good bits of both the Pentium 4 and Pentium M architectures, such as the 64-bit extensions, fast front side bus, optimized power consumption and dual-core support. The details are still a bit sketchy, but Intel seems dead set on keeping power consumption down to about 35 watts for mobile processors, 65 watts for desktop processors and 80 watts for server processors. That's a far cry from the up to 130 watts of power consumption some of Intel's current single-core processors are capable of. Mobile processors will go by the Merom codename, whereas desktop processors are said to carry the Conroe codename; no details yet on server processors, though.
              The New Athlon Processor - AMD Is Finally Overtaking Intel        

    The New Athlon Processor - AMD Is Finally Overtaking Intel

    Back in October 1998 at the Microprocessor Forum in San Jose, California, the PC world watched and listened in amazement to Dirk Meyer's first presentation of the K7's (now the Athlon's) architecture. It was quite obvious to experts as well as to most other listeners, including Intel employees, that this new AMD processor would mark a new era in the processor world, if AMD could make its promises come true. Now, finally, the waiting is over and we can look at a new processor that is indeed living up to all the positive expectations that arose at the end of last year.

    Later on in this article you will find that the AMD Athlon beats the Intel Pentium III in virtually every benchmark we've run, but before we get into those benchmark numbers, I'd like to take the time to explain why the concept of the Athlon is indeed more than 'just another new CPU'; it is a milestone for the whole processor scene.


              The AMD Athlon XP processor with QuantiSpeed™        
    Today at the Electronic Entertainment Expo (E3), Infinium Labs, Inc. (OTC BB: IFLB) announced that the forthcoming Phantom Gaming Service™ will incorporate the AMD Athlon™ XP processor 2500+.

    The AMD Athlon XP processor with QuantiSpeed™ architecture powers an innovative and customer-friendly gaming platform that delivers performance for cutting-edge applications and a powerful gaming experience. The AMD Athlon™ XP processor with QuantiSpeed™ architecture provides stability, compatibility and excellent 32-bit performance.

    “As online gaming continues its amazing growth, customers are looking for innovative ways to augment their gaming experience. The Phantom Gaming Service, with systems based on the AMD Athlon XP processor 2500+, will allow gamers to play a wide range of games on demand,” said John Morris, desktop marketing manager at AMD. “With our processors powering the system hardware, the Phantom Gaming Service will be poised to deliver solid game play based on AMD technology - a long-time favorite with the gaming community.”

    The much anticipated Phantom Gaming Service – the first end-to-end, on-demand, subscription-based game distribution service – is slated to launch on Thursday, Nov. 18, 2004 and will give everyone in the family a library of titles available any time, day or night, in the comfort of their home. The service includes the industry's first free game platform hardware – the Phantom Game Receiver™ – in which the AMD Athlon XP processor 2500+ will play a central role.

    “It’s critical that we provide our customers with the best game play experience achievable,” said Kevin Bachus, president, Infinium Labs. “By integrating components from top suppliers like AMD, we can ensure that gamers will get the best possible performance out of the service.”

    Infinium Labs is demonstrating the Phantom Gaming Service this week at E3 in Los Angeles, May 12 – 14, 2004, at Booth 746 in the South Hall of the Los Angeles Convention Center.

              AMD's quad-core Opteron        

    In a telling sign of just how much the microprocessor industry has changed in the past few years, the GHz race has given way to the current round of n-core races, where n equals some even number of cores. Of course, the dual-core race and its successor, the quad-core race, aren't quite as straightforward as the older clockspeed races, given the complexities inherent in bringing new multicore designs to market. It's also the case that the labels "dual-core," "quad-core," and so on are open to some interpretation (I go back and forth on this issue here): do all the cores have to be on a single die, or can they just inhabit the same package?

    The answer to this last question pretty much dictates who wins each leg of the n-core race, with the AMD multicores all sitting on a single die and the Intel multicores debuting with package-level integration before moving to die-level integration. This pattern held for the dual-core race, and it looks like it's going to hold for the quad-core race, as well.

    This past month, Intel stated in a conference call that they'd be bringing the first quad-core parts to market in 4Q06. The quad-core Kentsfield consists of two Core 2 Duo E6700 chips sandwiched together into a single package. This move will bring Intel into the four-cores-per-socket realm well ahead of AMD's planned introduction of the quad-core Opteron. (More on this latter chip in a moment.) Newly leaked roadmaps have Kentsfield debuting at 2.66GHz for $999. That's the same price as the current Core 2 Duo Extreme X6800 part.

    These four-core Kentsfield parts will go head-to-head with AMD's 4x4 systems. I think these two very different system architectures are going to offer a very interesting and stark choice for system builders. With four cores sitting on a single 1066MHz FSB, Kentsfield is going to have much lower per-core memory and FSB bandwidth than the comparable 4x4 system. For its part, the 4x4's two-socket design offers much higher per-core bandwidth that should give it a significant edge in bandwidth-intensive applications.
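
    To put rough numbers on that point: a 1066MHz front-side bus moves 8 bytes per transfer, and on Kentsfield that total is shared by four cores, whereas each socket of a 4x4 system has its own memory controller. The snippet below is back-of-the-envelope arithmetic only; the dual-channel DDR2-800 figure used for the 4x4 side is an assumption for illustration, not a confirmed specification.

        # Rough per-core bandwidth comparison (illustrative arithmetic only).
        FSB_MHZ = 1066
        BUS_BYTES = 8  # 64-bit front-side bus

        kentsfield_total = FSB_MHZ * 1e6 * BUS_BYTES / 1e9   # ~8.5 GB/s shared by all four cores
        kentsfield_per_core = kentsfield_total / 4           # ~2.1 GB/s per core

        # Assumption: each 4x4 socket has dual-channel DDR2-800 (2 x 6.4 GB/s) feeding two cores.
        quadfx_per_socket = 2 * 6.4
        quadfx_per_core = quadfx_per_socket / 2              # ~6.4 GB/s per core

        print(f"Kentsfield: {kentsfield_total:.1f} GB/s total, {kentsfield_per_core:.1f} GB/s per core")
        print(f"4x4 (assumed DDR2-800): {quadfx_per_socket:.1f} GB/s per socket, {quadfx_per_core:.1f} GB/s per core")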

    Complicating this picture is the fact that Kentsfield's individual cores will outperform the individual Athlon 64 FX cores by a significant margin. So the Kentsfield systems will have more total CPU horsepower than the 4x4 competition, but the CPU will be sipping code and data through a relatively thin straw. (See this post for more on these kinds of bandwidth issues in quad-core systems.)

    My prediction is that when these two types of four-core systems are benchmarked against each other, the results are going to vary with application type to a much higher degree than reviewers have so far been accustomed to. This being the case, I think synthetic and toy benchmarks are going to be increasingly pointless as review tools. It's one thing to use synthetic benchmarks to get CPU horserace numbers for two systems that are very similar, but when you move out of the realm of oranges vs. oranges and into the realm of oranges vs. grapefruit, it becomes less of a horserace and more of a question of which tool best fits the specific types of jobs that you want to do. In this context, real-world application performance is the only thing worth looking at.

    AMD's quad-core Opteron

    http://digitalbattle.com/wp-content/uploads/2006/06/amd_FX.jpg

    Just yesterday, AMD revealed that they won't move to four cores per socket until much later than Intel, in mid-2007. Even then the quad-core parts will start out at the top of the server-oriented Opteron line before trickling down into the desktop space.

    The quad-core Opteron, which just taped out, will arrive later than Kentsfield because it's a more advanced, more integrated design that puts four cores on the same piece of silicon. This "later than Intel, but more highly integrated" approach served AMD extremely well in the dual-core race, but I don't think the tactic is going to pay off to quite the same extent in the quad-core realm.

    Intel's first dual-core part was two Prescotts stuck into a single package, but Prescott was a dog. In contrast, two Woodcrests in a single multi-chip module (MCM) format (i.e. the Clovertown Xeons) will offer a ton of horsepower, despite the low level of integration. While I won't make any detailed predictions about the quad-core server horserace, I think it's safe to say that we won't see a quad-core repeat of the kind of blow-out that happened when the dual-core Opterons went up against the MCM-based dual-core Xeons.


              ATI shareholders say yes to AMD        

    ATI shareholders say yes to AMD


    Advanced Micro Devices' (AMD) proposed acquisition of graphics chip maker ATI Technologies has received the approval of ATI's shareholders, the companies announced Friday.

    AMD intends to buy ATI for $5.4 billion in order to take advantage of ATI's graphics and chipset prowess. The idea is to compete with Intel's ability to present PC companies with a complete product that includes a processor, chipset and graphics technology. Eventually, AMD also plans to integrate graphics technology directly into the processor, it has said.

    AMD DEAL

    Final approval of the deal is required during a court hearing next week, and the transaction is expected to close by the end of October. ATI is based in Markham, Ontario, and the Canadian government also gave its seal of approval to the deal on Friday. Dunno whether AMD will change the present company name.
              amd        

    Advanced Micro Devices, Inc. (abbreviated AMD; NYSE: AMD) is an American manufacturer of integrated circuits based in Sunnyvale, California. The company was founded in 1969 by a group of former executives from Fairchild Semiconductor, including Jerry Sanders, III, Ed Turney, John Carey, Sven Simonsen, Jack Gifford and three members from Gifford's team, Frank Botte, Jim Giles and Larry Stenger.

    It is the world's second-largest supplier of x86-based processors, and the world's second-largest supplier of graphics cards and GPUs after taking control of ATI in 2006. AMD also owns a 37% share of Spansion, a supplier of non-volatile flash memory.



    AMD Live!

    AMD Live! logo (TM)
    Main article: AMD Live!

    AMD LIVE! was originally the name of Advanced Micro Devices' initiative to gather the support of professional musicians and other media producers behind its hardware products. The primary focus of this initiative was the Opteron server- and workstation-class central processing unit.

    AMD subsequently extended AMD LIVE! into a platform marketing initiative focusing on the consumer electronics segment.

    AMD LIVE! was first officially announced on January 4, 2006, through a press release.


                      

AMD Quad FX platform

    Main article: AMD Quad FX platform

Geode processors

    Main article: AMD Geode

    In August 2003, AMD also purchased the Geode business (originally the Cyrix MediaGX) from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture with speeds of 667 MHz and 1 GHz (fanless), and 1.4 GHz (TDP 25W).

Pacifica/AMD-V

    AMD's virtualization extension to the 64-bit x86 architecture is named AMD Virtualization (also known by the abbreviation AMD-V), and is sometimes referred to by the code name "Pacifica".

    AMD processors using Socket AM2, Socket S1, and Socket F include AMD Virtualization support. AMD Virtualization is also supported by release two (x2xx series) of the Opteron processors.
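
As a side note (not part of the original text), software can detect AMD-V at runtime through CPUID: on AMD processors, leaf 0x80000001 reports the SVM flag in ECX bit 2. A minimal C++ sketch, assuming a GCC or Clang toolchain on an x86-64 machine:

#include <cpuid.h>
#include <cstdio>

int main()
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    // CPUID leaf 0x80000001: on AMD CPUs, ECX bit 2 is the SVM (AMD-V) flag.
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        std::puts("AMD-V (SVM) supported");
    else
        std::puts("AMD-V (SVM) not reported by CPUID");
    return 0;
}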

Production and fabrication

    “ Only real men have fabs. ”

    —Former AMD CEO Jerry Sanders, III, [4]

AMD produces its own processors in wholly owned semiconductor fabrication plants, called "FABs."

AMD uses a "FAB x" naming convention for its production facilities, where "x" is the number of years that passed between the founding of AMD and the date the FAB opened; Fab 36 in Dresden, for example, opened in 2005, 36 years after AMD was founded in 1969.

At their fabrication facilities, AMD utilizes a system called Automated Precision Manufacturing (APM). APM is a collection of manufacturing technologies AMD has developed over its history (many of which AMD holds patents for), which are designed to enhance the microprocessor production process, primarily in terms of yield. Much of APM is related to removing the "human equation" from the manufacturing process by isolating in-process wafers in containers that are only exposed to clean room facilities. AMD claims that the technologies that combine to make APM are unique to the industry and make it the foremost semiconductor manufacturer in the world - a claim lent some credence by its current agreement with Chartered Semiconductor Manufacturing, based in Singapore. India's first Fab City, a silicon chip manufacturing facility, is being set up with an investment of $3 billion by the AMD-SemIndia consortium.

    AMD currently has a production agreement with foundry Chartered Semiconductor Manufacturing which allows Chartered access to AMD Automated Precision Manufacturing (APM) process technology, in exchange for which Chartered will act as extra production capacity for AMD.

AMD has planned expansions in its production capacity. In addition to the completion of Fab 36 in Dresden (300 mm wafers, 90 nm SOI process), AMD is planning to upgrade Fab 30 (adjacent to Fab 36) in Dresden from a 200 mm, 90 nm SOI process to a 300 mm, 65 nm silicon-on-insulator (SOI) facility and rename it Fab 38, and to open a new facility at the Luther Forest Technology Campus in Stillwater, New York (likely 300 mm, 32 nm SOI production) between 2009 and 2010.



              strangest consoles ever made        

    10. Colecovision Portable

    Yes, this will play all your old school Colecovision games like “Donkey Kong”, “Galaxian”, and “Zaxxon” in all their 16 color glory. Ben Heckendorn, creator of the NES Micro, made a custom case, tore apart an old Colecovision system, designed his own controller, and put it all together into the sleek package you see above. It features A/V outputs, an auxiliary power input, and a reflective black vinyl case with brushed aluminum accents. Unfortunately, this one-of-a-kind system was built by request and has already been sold.

    [Source]

    9. NEStation

The NEStation is one of the most unique custom systems we’ve ever come across. A French modder painted his NES completely black with blue accents, created a custom vertical stand, installed four blue LEDs, and then carved a PS2-style logo into its side.

    [Source]

    8. The nPod

    The nPod is Ben Heck’s latest gaming console, featuring a 3.5-inch LCD display, custom machined case (only 41mm thick), and a rear-loading cartridge slot. It’s powered by 4 AA batteries and can play any NES game.

    [Source]

    7. Portable Sega CDX

Most of you may not remember the CDX; it combined the Sega Genesis and Sega CD into one console. SegaSonicFan’s portable CDX sports a 5″ display, JP/US import switch, second headphone jack, S-Video output, external controller switch, and a built-in automatic scan FM radio. It even plays 32X games.

    6. Gamecube-to-Go

Gamelver spent a great deal of time constructing this portable Gamecube — especially the case. It looks to feature external controller ports for multiplayer action, along with a pair of speakers. Other specifications have not yet been released.

    5. NESPlusSega

    This all-in-one machine can play both Sega Genesis and NES games. The case was made from custom molded ABS plastic and features controller ports for both systems.

    [Source]

    4. Handheld Atari Jaguar

The Jaguar was the world’s first gaming system with two 32-bit processors. Unfortunately, the system met its demise in early 1996 due to poor sales. Well, Dave decided to pay tribute with this portable Jaguar.

    3. Sega Genesis/Mega Drive Mini

Kotomi took one of those 6-in-1 Sega TV game devices and turned it into a Genesis/Mega Drive mini, complete with cartridge slot. One potential drawback: he doesn’t mention whether the cartridge slot is functional — it’s an interesting project nonetheless. One more picture after the jump.

    2. Dreamcast Portable

Dave took on an ambitious project when he created this portable Dreamcast from scratch. It features a custom designed case, 5″ LCD display, and a built-in 16MB memory card. Powered by two rechargeable batteries, it’s good for up to 1 1/2 hours of playtime.


1. PS2 Portable



    http://benheck.com/Games/Sony_projects/PS2p/PS2pMainPic2.jpg


              Tier 2, Payroll Processor - NGA Human Resources - St. John's, NL        
    **The Winner of the SAP Pinnacle Award for Global Customer Satisfaction in 2006, 2008, and 2010 ***Top Employer 2008 - Britain in which we have been recognized...
    From NGA Human Resources - Tue, 08 Aug 2017 18:04:59 GMT - View all St. John's, NL jobs
              Latest 3herosoft iPod Mate for Mac        
3herosoft iPod Mate for Mac is a super iPod manager suite designed specifically for iPod fans. It contains three powerful tools: 3herosoft iPod Video Converter for Mac, 3herosoft DVD to iPod Converter for Mac, and 3herosoft iPod to Computer Transfer for Mac.
Compared with other similar software, 3herosoft iPod Mate for Mac has many additional features:
1. Fully supports the latest iTunes 4.3 and iOS 10.2;
2. Works with all models of iPod, including iPod classic, iPod nano, iPod 5G and iPod touch, as well as iPhone 3G, iPhone 4 and iPad;
3. Transfers music, video, photos, ePub, PDF, voice memos, Podcast and TV files from iPod to Mac and iTunes;
4. Transfers Camera Roll photos, ringtones, SMS, contacts and call lists from iPhone to Mac for backup;
5. Transfers iPod playlists to iTunes, creates and edits iPod playlists, and transfers files among multiple iPods at once;
6. Converts DVD movies and all popular video formats to H.264 and MPEG-4 for playback on iPod and iPhone;
7. Easily rips DVD audio and movie music to MP3, AAC and M4A for high-quality playback on iPod and iPhone;
8. Converts or rips two or more files simultaneously to save time, and can even apply several predefined profiles to the same source at once;
9. Trims video by setting start and end points to cut out the part you like; you can also set the start time and duration to extract a portion of the video;
10. The intuitive interface makes the job a piece of cake in just a few clicks, and conversion speed is increased by support for dual-core Intel processors and PowerPC G4/G5;
11. Fully supports the updated Mac OS X Lion.

              [Updated: It's Gone] Samsung Galaxy Player 4.0 Now Available On Amazon For $229        


    UPDATE: ...and it's gone. Did anyone successfully place an order before Amazon pulled the listing?

    If you're the type that would rather have a dedicated MP3 player instead of using your phone for such a task, but still want to show your love and support for Android, then you'll be glad to know that the Samsung Galaxy Player 4 is now officially on sale at Amazon for $229.


    This 4 inch iPod Touch competitor features a 1GHz processor, 8GB of internal storage with SD card slot (expandable up to 32GB), 3.2 MP rear camera, VGA front camera, Wi-Fi b/g/n, GPS, Bluetooth, and Android 2.2 with Market access.


    [Updated: It's Gone] Samsung Galaxy Player 4.0 Now Available On Amazon For $229 was written by the awesome team at Android Police.


              G.SKILL develops super-fast RAM        


RAM maker comes up with DDR4-4333MHz 16GB


RAM maker G.SKILL has released a new DDR4-4333MHz 16GB (2x 8GB) memory kit.

    The outfit said that it has managed to overclock it to 4500MHz using an Intel Core i5-7600K processor paired with an ASUS ROG Maximus IX Apex motherboard.

    "The latest addition to the Trident Z series of extreme performance memory kit is the DDR4-4333MHz CL19-19-19-39 timing in 16GB (8GBx2) at 1.40V. This is the first DDR4-4333MHz memory kit on the market in the 8GBx2 configuration for a total of 16GB," said G.SKILL.

    The company said that continuing with the pursuit of extreme memory speeds on the latest hardware, G.SKILL has reached an extreme DDR4-4500MHz speed on the Intel Z270 platform, "achieving a stunning bandwidth write speed of 65GB per second in dual channel mode".
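
For a rough sense of scale (this calculation is not from the article): a standard DDR4 channel is 64 bits (8 bytes) wide, so dual-channel DDR4-4500 has a theoretical peak of about 4500 MT/s x 8 bytes x 2 channels ≈ 72 GB/s, which puts the quoted 65GB per second write figure close to that ceiling.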

    No word on price or release date yet.


              Chief River 2012 platform has fast flash standby        

    Resumes like standby, consumes like Hibernate
Intel will try to convince more consumers to hop aboard the SSD train in 2012, especially as part of the 2012 Chief River notebook platform, based on 22nm Ivy Bridge.

This new technology is rather easy to understand, as it acts as an improvement to the existing standby technology. Of course, the system has to have a new Ivy Bridge Core processor, a Panther Point PCH chipset, Windows 7, an SSD drive and a BIOS that enables it. The technology allows the user to put the machine into standby mode; after a configured amount of time, data moves from DRAM to non-volatile memory, in this case the SSD.

When a user turns the PC back on, the information goes back from the SSD to DRAM, and system resume time is the same as from traditional standby. The second and biggest benefit is that although it resumes like standby, it consumes power like hibernate.

Of course this will only work with Chief River compatible notebooks that are coming at some point in 1H 2012, but it looks like a nice feature that will keep your system locked and loaded at all times. It means waking up much faster than from hibernate, with all applications ready just the way they were when you put the system into standby, and all that in just a few seconds.



              Digital Literature : From Text to Hypertext and Beyond        
    a thesis by Raine Koskimaa Today we are living in an increasingly digitalized culture – so much so that it soon may become as ubiquitous as electricity. When that happens, it will be as trivial to speak of digital-whatever, as is at present to speak of electrical culture. The pace and mode of digitalization varies from one cultural sphere to another. All cultural phenomena have their own traditions, conventions, and ways to evolve. There is always friction – cultural habits seldom change over-night, even though technological development may be drastic at certain times. Cultural phenomena are also diverse and heterogeneous and the change may proceed at different speed in different aspects of the phenomenon. This is very much the situation of literature at the moment. In book printing the digital presses have been a part of every day business for some time already. Through word processors a vast majority of literature is written and stored in digital format. We can say that since the 80’s digital processing has been an inseparable part of book production, even though the end product has been, and still mainly is, a printed book. The computer revolution and accompanying software development have given birth to a whole new field of digital texts, which are not bound to the book as a medium. These texts can be read from computer screen, or increasingly, from different reading devices, so called e-books. Digital textuality opens an infinite field to expand literary expression. The difference between print and digital texts can be put simply: print text is static, digital text is dynamic. Digital textuality can be used in many ways in literature. So far the most common way has been to treat digital textuality as an alternative medium for literature – the literature stays the same even though it is published as digital text; it could be published in print as well. There are certain advantages in digital format as such, eg. digital files can be transferred quickly from one place to another, digital texts can be easily updated etc. There is, however, literature which uses digital textuality much more effectively. They integrate aspects of digital dynamics as part of their signifying structure and widen the range of literary expression. Typically this literature cannot be published in print at all. The rise of the so called new media in the wake of digitalization has caused strong media panics, which have had a take on the ponderings about the future of literature too. In most generic forms the questions have been: will book disappear?, will reading die?, will literature vanish? Naturally, there are no simple answers to these questions and answering them is even harder because several different (even though closely interrelated) topics are usually confused. It seems as a safe guess that book as we know it will loose ground to digital texts. This will not, however, be as drastic a change as it may sound to some – literature is not bound to book format. Literature has survived changes from orality to papyrus scrolls; to pergaments; to codex book; there is no reason to believe it would not survive the change for the machines. Literature is inevitably dependent, to some extent, on its medium, but this does not mean that the evolution of literature would be simply following changes in its material basis. The medium sets its limitations, but inside those limits literature has been continuously changing and evolving. 
The change from print text to digital text doesn’t automatically cause any changes in literature. On the other hand, there seems to be a line of evolution inside literature which tends towards digital textuality without any outside pressure, as a natural next step. Also, digital textuality has caused an opposite evolution, literature which is pointedly committed to the materiality of print book. So, if we take a look at literature today, we can see that there are several things going on simultaneously: traditional print literature is still going strong (according to many indicators, stronger than ever), there is parallel publishing (the same text in print and digital formats), there is literature published in digital format because of technical reasons, there is such ”natively” digital literature which isn't possible in print, and there is literature published as handmade artists' books. Digitalization touches the whole field of literature, directly or indirectly, more or less strongly. Still, this is just the beginning, and the transitory nature of the present situation has resulted in spectacular prophesies and speculations regarding the future of literature. Speculations are important, naturally, as there is no future without visions, but we need also to stop for a while now and then and reflect. And first observations probably are: there is very little of original digital literature existing yet; the old conventions, formed during the five centuries of print literature, direct our expectations of digital literature; the boundaries between literature and non-literature are becoming diffuse. In this study, I have chosen ”hypertext” as the central concept. If we define hypertext as interconnected bits of language (I am stretching Ted Nelson’s original definition quite a lot, but still maintaining its spirit, I believe) we can understand why Nelson sees hypertext ”as the most general form of writing”. There is no inherent connotation to digital in hypertext (the first hypertext system was based on microfilms), but it is the computerized, digital framework – allowing the easy manipulation of both texts and their connections - which gives the most out of it. In addition to the ”simple” hypertexts, there is a whole range of digital texts much more complex and more ”clever”, which cannot be reduced to hypertext, even though they too are based on hypertextuality. Such digital texts as MUDs (Multi User Domains – text based virtual realities) are clearly hypertextual – there are pieces of text describing different environments usually called ”rooms” and the user may wander from room to room as in any hypertext. At the same time, however, there are several other functions available for the user, she may talk with other users, write her own rooms, program objects performing special tasks, or, solve problems and collect game points. Hypertextuality and hypertext theory do not help us much (if at all) in understanding this kind of textual functionality. For that we need cybertext theory. Cybertextuality is – as Espen Aarseth has defined it – a perspective on all texts, a perspective which takes into account and foregrounds the functionality of all texts. From the cybertextual point of view all texts are machines which perform certain functions and which have to be used in a certain way. Also, the reader may be required to perform some functions in order to be able to read the texts, or, she may be allowed to act as an active participant inside the textual world. 
Cybertextuality, then, is not only about digital texts, but because digital form allows much more freedom to textual functionality, there is much more need for cybertext theory in the field of digital texts than in print text[1]. So, keeping in mind cybertextuality is a perspective on all texts, we can use the term cybertext in a more limited sense to refer to functional digital texts – this means that all digital texts are not necessary cybertexts (plain text files like in the Project Gutenberg archives, or, e-texts in pdf format are no more functional than average print texts). Now we can better define the scope of this study. The theoretical framework is a combination of cybertext theory and more traditional theory of literature. The focus is on hypertext fiction, even though several other text types - digital and non-digital, literary and non literary, fiction and poetry – are also discussed. To deepen the understanding of hypertext fiction and its reading, quite of lot of attention is paid to the evolutionary line of print fiction which seems to be a major influence in the background. That aspect explains the first part of the subtitle, ”From text to hypertext”, with an emphasis on the transitory phase we are witnessing. On the other hand, the approach is open to the latent aspects of the hypertexts discussed, which already refer to the wider cybertextual properties – because of that the ”and Beyond”. In the main title, ”Digital Literature”, literature is used in a narrow (”literary”) sense. The method is inductive in that through scrutinizing individual, concrete exmples, a more general understanding of the field is sought after. Through not trying to include all the possible digital text types in this study I aim to be more analytic than descriptive. This work should be seen as a collection of independent papers – some of them are previously published, some are still waiting for a proper forum. Most of them have started as seminar papers. I have used the opportunity to make some corrections and changes to the articles previously published (mainly to reduce redundancy, or, to add materials cut out from the publications) – thus, the chapters of this study are not identical with published versions. This study is in its fullest form as a web based electronic text – however, if you are reading this study in print format you are not missing anything substantial. The web text includes additional linking, which makes it easier to follow some ”sub-plots” inside the work – themes that reoccur in different contexts. Also, in web version, many of the works discussed are directly linked to the text, and thus, only a click away. In the first chapter of this study I will give a description of the various traditions behind digital literature, of characteristic properties of digital literature, and, the basics of cybertext theory. I consider various hypertext studies belonging as a part to the broader category of cybertext theory. The second chapter, ”Hyperhistory, Cybertheory: From Memex to ergodic literature”, is an overview of cybertext theory, circling around Aarseth’s theory of cybertext and ergodic literature. Various other approaches are discussed, and integrated to the theoretical framework. For understanding cybertext theory, a historical glance to the development of hypertext systems (and ideologies behind them) is necessary. The integration of hyper- and cybertheories is still very much in progress – hopefully this chapter contributes to that integration. 
In the third chapter ”Replacement and Displacement. At the limits of print fiction”, several novels and stories are scrutinized from the cybertextual perspective. The aim of the chapter is to show the various ways in which print fiction has anticipated hypertextual practices. The fourth chapter, ”Ontolepsis: from violation to central device” focusses on the narrative device which I have dubbed ontolepsis. Ontolepsis covers different kinds of ”leaks” between separate ontological levels (inside fictional universe). Metalepsis, the crossing of levels of embedded narration, is one type of ontolepses, and certainly so far the most studied one. There is a rather lengthy discussion of fictional ontology, and its relation to narrative levels, because these are essential topics in understanding the phenomenon of ontolepsis in all its forms. A science fiction novel, Philip K. Dick’s Ubik, is used as an example, because its multilayered ontology serves perfectly in illustrating the multifarious nature of ontolepsis. In fiction, ontolepses have been seen as violations of certain conventions – the latter part of the chapter discusses how in hypertext fiction ontolepsis has become a central narrative device. In the fifth chapter, ”Visual structuring of hypertext narratives”, three hypertexts, Michael Joyce’s Afternoon, Stuart Moulthrop’s Victory Garden, and Shelley Jackson’s Patchwork Girl, are analyzed stressing their navigation interfaces and use of ”spatial signification”. Narratological questions are also foregrounded. Chapters six and seven, ”Reading Victory Garden – Competing Interpretations and Loose Ends” and ”In Search of Califia” form a pair. They are rather lengthy analyses, or, interpretations, of Stuart Moulthrop’s Victory Garden, and M. D. Coverley’s Califia. In the end of Califia chapter, the question of interpreting hypertexts is discussed. Two forms of interpretative practice, hermeneutics and poetics, seem to have their own roles in regard to hypertexts. The next chapter, ”Negotiating new reading conventions” focusses on reading. In this chapter I’ll look at how traditional reading conventions, on the one hand, still inform hypertext reading, and on the other hand, how hypertexts themselves teach new reading habits, and how new reading formations are negotiated. The final chapter, ”Hypertext Fiction in the Twilight Zone” is a kind of summary. It suggest that fiction based on ”pure” hypertext may be closing its end, and at the same time, looks at the cybertextual means which have appeared to fertilize the field anew. In the horizon there are computer games, virtual realities and other massively programmed forms towering, but also a possibility for a new literature. [1] Which is not to say that there were no use for cybertext theory in the field of print texts – first, there is an amount of experimental or avant garde print texts which take full advantage of functionality potential print book offers; and secondly, there is still much to do to understand the way how literature (even in the most traditional form) works as a technology (see Sukenick (1972) ”The New Tradition”, in In Form: Digressions on the Act of Fiction. Carbondale and Edwardsville: Southern Illinois University Press) – cybertext theory should prove quite fruitful in that field of study. more visit http://users.jyu.fi/~koskimaa/thesis/thesis.shtml

              Chocolate Avocado Pudding with Cocoa Nibs        
    I had a lot of fun at our CFSCC Nutrition Talk! Thank you so much for showing up on a Saturday afternoon excited for information about Paleo nutrition... What a great, motivated and inspiring audience! I've been in the field of nutrition for many years and it is not always something people are eager to respond to! In fact, I usually feel like I'm bearing some sort of "bad news" when discussing it! But not on this day, and I love it!!! I truly hope you all left with something valuable and motivated for a 30-day challenge! It truly is all of you that inspire me to experiment, create and cook! So, thank you!
    *There are still a few Paleo cookbooks available!

I read all of the nutrition assessment forms, and hands-down, most of the food cravings were for chocolate! (There was one particular craving for pudding, so I figure this recipe takes care of everyone.) This pudding turned out wonderful! Honestly, you do not taste avocado at ALL. It just provides the perfect texture and creaminess. Careful not to use too ripe of a banana, or I think it will overpower the rich chocolate flavor. Be generous with the cocoa nibs! They add an excellent bit o' chocolaty crunch! Hope you enjoy! *This recipe is adapted from: Natural Noshing.com

    Recipe:
    2 avocados
    5 T unsweetened cocoa powder
    1 banana (not overripe or it will be too overpowering)
    1/3 C unsweetened vanilla almond milk
    4 T honey
    1 tsp. vanilla extract
    1/2 tsp. cinnamon
    pinch sea salt
    Garnish with cocoa nibs

    In a food processor or blender, add all ingredients (except cocoa nibs). Blend well for about 1 minute until smooth and creamy. If it's too thick, add a little more almond milk. Spoon into serving dishes and garnish with cocoa nibs. Chill for about 15-30 minutes before serving.

    Hey! Here's a little Carbohydrate talk for you! I've had some questions in the gym lately and realized we kind of skimmed past this during the nutrition talk!
    Hopefully this will answer your questions on Paleo Carbs!

    There are two kinds of Carbohydrates: Simple and Complex
Simple carbs are the simple sugars like sucrose (table sugar), fructose (fruit sugar), glucose and lactose (milk sugar).
    On the Paleo diet we would consume these in moderation: fruit, dried fruit and honey (small amounts). Limit fruit to 2 servings and dried fruit to 2 oz. per day if weight loss is a goal!
    Best fruit choices: Berries, berries, berries! Melons, tropical varieties, and citrus.

    Complex carbs for the Paleo diet are: starchy root vegetables. Best choices are: sweet potatoes, parsnips, turnips, butternut squash, acorn squash, spaghetti squash, rutabaga, carrots, beets, onions, kohlrabi, yams and jicama!!! ... That's enough to keep us busy cooking!
    What we do NOT eat for complex carbs are: breads, cereals, bagels, crackers, tortillas, quinoa, rice, cookies, beans, baked goods and all highly processed grain-laden foods!

    Please post thoughts to comments!



              Calculating the average of two numbers using C++        
Though this site aims at understanding the C language, I am writing a program for you in C++, so you can get a general idea about C++ too.
Here is a program in C++ to calculate the average of two numbers, which are entered from the keyboard during the execution of the program.
Try to understand the program:

#include <iostream>
using namespace std;
int main()
{
    // Declare the two inputs, their sum and their average as floats
    float num1, num2, average, sum;
    cout << "Enter the first number:";
    cin >> num1;   // read the first number from the keyboard
    cout << "Enter the second number:";
    cin >> num2;   // read the second number from the keyboard
    sum = num1 + num2;
    average = sum / 2;
    cout << "The sum of the two numbers is:" << sum << "\n" << "The average of the two numbers is:" << average;
    return(0);
}






Explanation:


The new header file used here in this C++ program is iostream, pulled in with #include. By this, we are asking the preprocessor to add the contents of the iostream file to our program.

The program is written inside the main() function, just like in C.

    The variables are defined as float in the same manner.

Here in C++ we use the cout and cin streams for output and input respectively, together with the insertion (<<) and extraction (>>) operators. These are parallel to printf and scanf in C.
In the last cout statement the output is cascaded by chaining the insertion operator (<<), which is new in C++.

As the main() function is of integer type, it returns a zero value, which is written as return(0).
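
For instance, assuming the user enters 4 and 6 at the two prompts, a run of the program would produce:

Enter the first number:4
Enter the second number:6
The sum of the two numbers is:10
The average of the two numbers is:5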



              Seminar Topics(100)        
    Excellent Seminar/Paper Presentation Topics for Students


Put the desired topic name in the search bar to get detailed results about the topic.

    1. 4G Wireless Systems
    2. A BASIC TOUCH-SENSOR SCREEN SYSTEM
    3. Artificial Eye
    4. Animatronics
    5. Automatic Teller Machine
    6. Aircars
7. Adding intelligence to internet using satellite
    8. ADSL
    9. Aeronautical Communications
    10. Agent oriented programing
    11. Animatronics
    12. Augmented reality
    13. Autonomic Computing
    14. Bicmos technology
    15. BIOCHIPS
    16. Biomagnetism
    17. Biometric technology
    18. BLUE RAY
    19. Boiler Instrumentation
    20. Brain-Computer Interface
    21. Bluetooth Based Smart Sensor Networks
    22. BIBS
    23. CDMA Wireless Data Transmitter
    24. Cellonics Technology
    25. Cellular Positioning
    26. Cruise Control Devices
    27. Crusoe Processor
    28. Cyberterrorism
    29. Code division duplexing
    30. Cellular Digital Packet Data
    31. Computer clothing
    32. Cordect WLL
33. CARBON NANO TUBE ELECTRONICS
    34. CARNIVORE AN FBI PACKET SNIFFER
    35. CDMA
    36. CELLONICSTM TECHNOLOGY
    37. CELLULAR NEURAL NETWORKS
    38. CELLULAR DIGITAL PACKET DATA
    39. CIRCUIT AND SAFETY ANALYSIS SYSTEM
    40. CISCO IOS FIREWALL
    41. CLUSTER COMPUTING
    42. COLD FUSION
    43. COMPACT PCI
    44. COMPUTER AIDED PROCESS PLANNING (CAPP)
    45. COMPUTER CLOTHING
    46. COMPUTER MEMORY BASED ON THE PROTEIN BACTERIO
    47. CONCEPTUAL GRAPHICS
    48. CORDECT
    49. CORDECT WLL
    50. CRUISE CONTROL DEVICES
    51. CRUSOE PROCESSOR
    52. CRYOGENIC GRINDING
    53. CRYPTOVIROLOGY
    54. CT SCANNING
    55. CVT
    56. Delay-Tolerant Networks
    57. DEVELOPMENT OF WEARABLE BIOSENSOR
    58. DiffServ-Differentiated Services
    59. DWDM
    60. Digital Audio Broadcasting
    61. Digital Visual Interface
    62. Direct to home television (DTH)
    63. DOUBLE BASE NUMBER SYSTEM
    64. DATA COMPRESSION TECHNIQUES
    65. DELAY-TOLERANT NETWORKS
    66. DENSE WAVELENGTH DIVISION MULTIPLEXING
    67. DESIGN, ANALYSIS, FABRICATION AND TESTING OF A COMPOSITE LEAF SPRING
    68. DEVELOPMENT OF WEARABLE BIOSENSOR
    69. DGI SCENT
70. DIFFSERV
    71. DIGITAL AUDIO BROADCASTING
    72. DIGITAL CONVERGENCE
    73. DIGITAL HUBBUB
    74. DIGITAL SILHOUETTES
    75. DIGITAL THEATRE SYSTEM
    76. DIGITAL WATER MARKING
    77. DIRECT TO HOME
    78. DISKLESS LINUX TERMINAL
    79. DISTRIBUTED FIREWALL
    80. DSL
    81. DTM
    82. DWDM
    83. DYNAMIC LOADABLE MODULES
    84. DYNAMICALLY RECONFIGURABLE COMPUTING
    85. ELECTROMAGNETIC INTERFERENCE
    86. Embedded system in automobiles
    87. Extreme Programming
    88. EDGE
89. ELECTROMAGNETIC LAUNCHING SYSTEM
    90. E BOMB
    91. E INTELLIGENCE
    92. E PAPER TECHNOLOGY
    93. ELECTRONIC DATA INTERCHANGE
    94. ELECTRONIC NOSE
    95. ELECTRONIC NOSE & ITS APPLICATION
    96. ELECTRONICS MEET ANIMALS BRAIN
    97. EMBEDDED
    98. EMBEDDED DRAM
    99. EMBEDDED LINUX
    100. EMBRYONICS APPROACH TOWARDS INTEGRATED CIRCUITS
              Murray Goulburn workers at Kiewa, Rochester 'devastated', urge governments to save jobs        
    Devastated workers for Australia's largest dairy processor call on both the Victorian and federal governments to help protect the future of their jobs, with the union saying they feel they have been lied to by the company.
              Murray Goulburn: Disappointed dairy farmer ploughs a blunt message into paddock        
    A dairy farmer has ploughed a blunt message to Australia's largest milk processor, the embattled Murray Goulburn, into a 10-acre paddock.
              Murray Goulburn to close factories and shed staff in Tasmania and Victoria        
    Dairy processor Murray Goulburn is to close three factories in response to falling milk supplies, leaving 360 workers in limbo.
              Apple Tablet's 'iPad' in Press Conference        
Apple's new product, the 'iPad', was announced at a press conference on 27th January 2010.


Home screen view, similar to the iPhone


App Store for iPad





    Processor
    1GHz Apple A4 custom-designed, high-performance and low-power system-on-a-chip

    Display
    9.7-inch (diagonal) LED-backlit glossy widescreen Multi-Touch display with IPS technology
    1024-by-768-pixel resolution at 132 pixels per inch(ppi)
    Fingerprint-resistant oleophobic coating

    Location
    Wi-Fi
    Digital compass
    Assisted GPS (Wi-Fi + 3G model)
    Cellular (Wi-Fi + 3G model)

    Wireless and Cellular
    Wi-Fi model
    Wi-Fi (802.11 a/b/g/n)
    Bluetooth 2.1 + EDR technology
    Wi-Fi + 3G model
    UMTS/HSDPA (850, 1900, 2100 MHz)
    GSM/EDGE (850, 900,1800, 1900 MHz)
Data only
    Wi-Fi (802.11 a/b/g/n)
    Bluetooth 2.1 + EDR technology

    Capacity
    16GB, 32GB, or 64GB flash drive

    Size and weight
    Height:
    9.56 inches (242.8 mm)
    Width:
    7.47 inches (189.7 mm)
    Depth:
    0.5 inch (13.4 mm)
    Weight:
    1.5 pounds (.68 kg) Wi-Fi model;
    1.6 pounds (.73 kg) Wi-Fi + 3G model

    Battery and Power
    Built-in 25Whr rechargeable lithium-polymer battery
    Up to 10 hours of surfing the web on Wi-Fi, watching video, or listening to music


    Official Apple's iPad link

              Advanced HLSL II: Shader compound parameters        
A very short post as a sequel to my previous post "Advanced HLSL using closures and function pointers": there is again a neat little trick using the "class" keyword in HLSL. It is possible to use a class to regroup a set of parameters (shader resources as well as constant buffers) and their associated methods into what is called a compound parameter. This feature of the language is absolutely not documented; I discovered the name "compound parameter" while trying to hack this technique, as the HLSL compiler was complaining about a restriction on this "compound parameter". So at least it seems to be implemented to the point that it is quite usable. Let's see how we can use this...

    Group of input parameters in shaders, the usual way

    Suppose the following code (not really useful):

    // Shader Resources
    SamplerState PointClamp;

    // First set of parameters
    // -----------------------
    Texture2D<float> DepthBuffer;
    float2 TexelSize;

    // Associated methods with these parameters
    float SampleDepthBuffer(float2 texCoord, int2 offsets = 0)
    {
    return DepthBuffer.SampleLevel(PointClamp, texCoord + offsets * TexelSize, 0.0);
    }

    // Second set of parameters
    // ------------------------
    Texture2D<float> DepthBuffer1;
    float2 TexelSize1;

    // Associated methods with these parameters
    float SampleDepthBuffer1(float2 texCoord, int2 offsets = 0)
    {
    return DepthBuffer1.SampleLevel(PointClamp, texCoord + offsets * TexelSize1, 0.0);
    }

    float4 PSMain(float2 texCoord: TEXCOORD) : SV_TARGET
    {
    return float4(SampleDepthBuffer(texCoord, int2(1, 0)), SampleDepthBuffer1(texCoord, int2(1, 0)), 0, 1);
    }

    What we have is some parameters that are grouped, for example
    • A resource DepthBuffer
    • A TexelSize that gives the size of a texel in uv coordinates for the previous textures (float2(1/width, 1/height))
    • A method "SampleDepthBuffer" that will sample the depth buffer.
And this set of parameters is duplicated as a second set with just the postfix "1". We need to duplicate the code here. Of course, as usual, there are some workarounds:
• Either by using the preprocessor and token pasting: this approach is often used, but it means code that is sometimes less readable, especially if you have to embed a function in a #define.
• Or, for the method SampleDepthBuffer, it would be possible to rewrite the signature to accept a Texture2D as well as a TexelSize as parameters. Of course, if this function used more textures and more parameters, we would have to pass them all explicitly...
    The generated code produced by fxc.exe HLSL compiler is like this:
    //
    // Generated by Microsoft (R) HLSL Shader Compiler 9.29.952.3111
    //
    //
    // fxc /Tps_5_0 /EPSMain test.fx
    //
    //
    // Buffer Definitions:
    //
    // cbuffer $Globals
    // {
    //
    // float2 TexelSize; // Offset: 0 Size: 8
    // float2 TexelSize1; // Offset: 8 Size: 8
    //
    // }
    //
    //
    // Resource Bindings:
    //
    // Name Type Format Dim Slot Elements
    // ------------------------------ ---------- ------- ----------- ---- --------
    // PointClamp sampler NA NA 0 1
    // DepthBuffer texture float 2d 0 1
    // DepthBuffer1 texture float 2d 1 1
    // $Globals cbuffer NA NA 0 1
    //
    //
    //
    // Input signature:
    //
    // Name Index Mask Register SysValue Format Used
    // -------------------- ----- ------ -------- -------- ------ ------
    // TEXCOORD 0 xy 0 NONE float xy
    //
    //
    // Output signature:
    //
    // Name Index Mask Register SysValue Format Used
    // -------------------- ----- ------ -------- -------- ------ ------
    // SV_TARGET 0 xyzw 0 TARGET float xyzw
    //
    ps_5_0
    dcl_globalFlags refactoringAllowed
    dcl_constantbuffer cb0[1], immediateIndexed
    dcl_sampler s0, mode_default
    dcl_resource_texture2d (float,float,float,float) t0
    dcl_resource_texture2d (float,float,float,float) t1
    dcl_input_ps linear v0.xy
    dcl_output o0.xyzw
    dcl_temps 1
    mad r0.xyzw, cb0[0].xyzw, l(1.000000, 0.000000, 1.000000, 0.000000), v0.xyxy
    sample_l_indexable(texture2d)(float,float,float,float) r0.x, r0.xyxx, t0.xyzw, s0, l(0.000000)
    sample_l_indexable(texture2d)(float,float,float,float) r0.y, r0.zwzz, t1.yxzw, s0, l(0.000000)
    mov o0.xy, r0.xyxx
    mov o0.zw, l(0,0,0,1.000000)
    ret
    // Approximately 6 instruction slots used

When we have to deal with lots of grouped parameters, and these groups need to be duplicated together with their associated methods, it becomes almost impossible to maintain clean and reusable HLSL code. Fortunately, the "class" keyword comes to the rescue!

    Shader input compound parameters container, the neat way

    Let's rewrite the previous code using the keyword "class":

    SamplerState PointClamp;

    // Declare a container for our set of parameters
    class TextureSet
    {
    Texture2D<float> DepthBuffer;
    float2 TexelSize;

    float SampleDepthBuffer(float2 texCoord, int2 offsets = 0)
    {
    return DepthBuffer.SampleLevel(PointClamp, texCoord + offsets * TexelSize, 0.0);
    }
    };

// Define two instances of compound parameters
    TextureSet Texture1;
    TextureSet Texture2;

    float4 PSMain2(float2 texCoord: TEXCOORD) : SV_TARGET
    {
    return float4(Texture1.SampleDepthBuffer(texCoord, int2(1, 0)), Texture2.SampleDepthBuffer(texCoord, int2(1, 0)), 0, 1);
    }

And the resulting compiled HLSL is roughly equivalent:

    //
    // Generated by Microsoft (R) HLSL Shader Compiler 9.29.952.3111
    //
    //
    // fxc /Tps_5_0 /EPSMain2 test.fx
    //
    //
    // Buffer Definitions:
    //
    // cbuffer $Globals
    // {
    //
    // struct TextureSet
    // {
    //
    // float2 TexelSize; // Offset: 0
    //
    // } Texture1; // Offset: 0 Size: 8
    // Texture: t0
    //
    // struct TextureSet
    // {
    //
    // float2 TexelSize; // Offset: 16
    //
    // } Texture2; // Offset: 16 Size: 8
    // Texture: t1
    //
    // }
    //
    //
    // Resource Bindings:
    //
    // Name Type Format Dim Slot Elements
    // ------------------------------ ---------- ------- ----------- ---- --------
    // PointClamp sampler NA NA 0 1
    // Texture1.DepthBuffer texture float 2d 0 1
    // Texture2.DepthBuffer texture float 2d 1 1
    // $Globals cbuffer NA NA 0 1
    //
    //
    //
    // Input signature:
    //
    // Name Index Mask Register SysValue Format Used
    // -------------------- ----- ------ -------- -------- ------ ------
    // TEXCOORD 0 xy 0 NONE float xy
    //
    //
    // Output signature:
    //
    // Name Index Mask Register SysValue Format Used
    // -------------------- ----- ------ -------- -------- ------ ------
    // SV_TARGET 0 xyzw 0 TARGET float xyzw
    //
    ps_5_0
    dcl_globalFlags refactoringAllowed
    dcl_constantbuffer cb0[2], immediateIndexed
    dcl_sampler s0, mode_default
    dcl_resource_texture2d (float,float,float,float) t0
    dcl_resource_texture2d (float,float,float,float) t1
    dcl_input_ps linear v0.xy
    dcl_output o0.xyzw
    dcl_temps 1
    mad r0.xy, cb0[0].xyxx, l(1.000000, 0.000000, 0.000000, 0.000000), v0.xyxx
    sample_l_indexable(texture2d)(float,float,float,float) r0.x, r0.xyxx, t0.xyzw, s0, l(0.000000)
    mov o0.x, r0.x
    mad r0.xy, cb0[1].xyxx, l(1.000000, 0.000000, 0.000000, 0.000000), v0.xyxx
    sample_l_indexable(texture2d)(float,float,float,float) r0.x, r0.xyxx, t1.xyzw, s0, l(0.000000)
    mov o0.y, r0.x
    mov o0.zw, l(0,0,0,1.000000)
    ret
    // Approximately 8 instruction slots used

    There are a couple of things to highlight:
• The main difference is that the constant buffer variables of a compound parameter are packed separately, together as a struct aligned on a float4 boundary. So in this specific case, the two float2 TexelSize variables cannot be swizzled/merged (if they were float4, the code would be strictly equivalent). We need to be aware of and careful about this behavior.
• Input resources are nicely prefixed by their compound parameter name, like "Texture1.DepthBuffer" or "Texture2.DepthBuffer", so it is also really easy to access them when using named resource bindings in an effect (see the C++ reflection sketch after this list). Note that a resource declared but unused inside a compound parameter will occupy a slot register without using it (this is not a big deal, as there is almost the same kind of behavior when using arrays of resources).
• We can still enclose "TextureSet Texture1" in a constant buffer declaration; the variables defined inside TextureSet for the Texture1 instance will correctly end up in the corresponding constant buffer.
• Global variables are accessible from methods defined in a compound parameter (for example the PointClamp SamplerState used by the SampleDepthBuffer method).
    • Compound parameters can only be compiled using SM5.0 (unlike the previous post about the closures).
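
Here is a minimal C++ sketch of the named resource binding mentioned above: using shader reflection to look up the register slot of a compound-parameter resource by its qualified name. This is not from the original post; FindTextureSlot is a hypothetical helper, and the bytecode pointer/size are assumed to come from an earlier D3DCompile call.

#include <d3d11.h>
#include <d3d11shader.h>
#include <d3dcompiler.h>

// Hypothetical helper (not part of the original post): returns the "t" register
// assigned to a resource such as "Texture1.DepthBuffer" inside a compiled shader.
UINT FindTextureSlot(const void* bytecode, SIZE_T bytecodeSize, const char* qualifiedName)
{
    ID3D11ShaderReflection* reflector = nullptr;
    D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
               reinterpret_cast<void**>(&reflector));

    // Compound parameter members are reported with their fully qualified name,
    // e.g. "Texture1.DepthBuffer" or "Texture2.DepthBuffer".
    D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
    reflector->GetResourceBindingDescByName(qualifiedName, &bindDesc);

    UINT slot = bindDesc.BindPoint;
    reflector->Release();
    return slot;
}

// Usage (assuming 'blob' holds the compiled PSMain2 bytecode):
// UINT depthSlot = FindTextureSlot(blob->GetBufferPointer(), blob->GetBufferSize(), "Texture1.DepthBuffer");
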
This is really a handy feature that can help to better organize some of our shaders. It's always surprising to still discover these kinds of syntax constructions accessible in the current HLSL compiler. Let me know if you find any issues using this trick!
              Advanced HLSL using closures and function pointers        
Shader languages like HLSL, Cg or GLSL are nowadays driving the most powerful processors in the world, but if you are developing with them, you may already have been a little bit frustrated by one of their expressiveness limitations: the common problem of abstraction and code reuse. To overcome this problem, solutions so far have mostly used a glue combination of #define/#include preprocessor directives to generate combinations of code and permutations of shaders, so-called UberShaders. Recently, this problem has been addressed for HLSL (new in Direct3D11) by the concept of Dynamic Linking, and for GLSL by the concept of SubRoutines. For Direct3D11, the new mechanism is only available for Shader Model 5.0, meaning that even if it could greatly simplify the problem of abstraction, it is unfortunately only available for Direct3D11-class graphics cards, which is of course a huge limitation...

But here is the good news: while the classic usage of dynamic linking is not really possible from earlier versions (like SM4.0 or SM3.0), I have found an interesting hack to bring some kind of closures and function pointers to HLSL(!). This solution doesn't involve any kind of preprocessing directive and works with SM3.0 and SM4.0, so it might be interesting for folks like me who like to abstract and reuse code as often as possible! Let's see how it can be achieved...


    A simple problem of abstraction and code reuse in HLSL


I have recently been working on a GPU implementation of a versatile perlin/simplex/fbm/turbulence noise in HLSL. While some of the individual algorithms are pretty simple, it is common to use several permutations of those functions in order to produce nice noise and turbulence functions (like the worm-lava texture I did for the Ergon 4k intro). Thus, they are an ideal candidate to demonstrate the use of closures and function pointers. I won't explain the basic principles of perlin and fbm noise generation here, in order to focus on the problem of code reuse in HLSL.


    Here is a simplified version of a Turbulence Noise implemented in a Pixel Shader:

    float PerlinNoise(float2 pos){
    ....
    }

    float AbsNoise(float2 pos) {
    return abs(PerlinNoise(pos));
    }

    float FBMNoise(float2 pos) {
    float value = 0.0f;
    float frequency = InitialFrequency;
    float amplitude = 1.0f;
    // Classic FBM loop
    for ( int i=0; i < Octaves; i++ )
    {
    float noiseValue = AbsNoise(pos);
    value += amplitude * noiseValue;
    frequency *= Lacunarity;
    amplitude *= Amplitude;
    }
    return value;
    }

    // Turbulence noise:
    // Fbm + Abs + Perlin
    float TurbulenceAbsPerlinNoisePS(float4 pos : SV_POSITION, float2 texPos : TEXCOORD0)
    : SV_Target
    {
    return FBMNoise(texPos);
    }

The problem with the previous code is that if we want to change the code behind AbsNoise called from FBMNoise (for example, to apply cos/sin on the coordinates, or to use simplex noise instead of the old Perlin noise), we would have to duplicate the FBMNoise function so that it calls the other function. Of course, we could use the preprocessor to inline the code, but it would end up less readable, less debuggable, error prone... etc.

    Another example: Ken Perlin introduced some really cool functions to modify the noise, like the famous marble effect:

    static float stripes(float x, float f) {
    float t = .5 + .5 * sin(f * 2*PI * x);
    return t * t - .5;
    }

    float MarbleNoise(float2 pos) {
    return stripes(pos.x + 2 * FBMNoise(pos), 1.6f);
    }

    But wait! The MarbleNoise function could even be used in place of the AbsNoise function, in order to get another noise effect. So we could have a marble function calling a FBM... but we could also have a marble function called by a FBM... or both...  ugh... so as we can see, It is possible to permute those functions to generate interesting patterns, but unfortunately, the shading language doesn't provide us a way to make those functions pluggable!... Almost! In fact, there is a small breach in the HLSL language and we are going to use it!


    Introduction to Dynamic Linking in HLSL


So as I said in the introduction, Direct3D11 has introduced the concept of dynamic linking. I suggest the reader look at the explanation on MSDN, "Interfaces and classes". Basically, the main feature introduced in the HLSL language is a bit of Object Oriented Programming (OOP) in order to address the problem of abstraction: now HLSL has the class and interface keywords. But they were mainly introduced for dynamic linking of a shader, and as I said, dynamic linking is only available with the SM5.0 profile.


    // An interface describing a light
    interface ILight {
    float3 ComputeAmbient(...);
    float3 ComputeDiffuse(...);
    float3 ComputeSpecular(...);
    };

    // A 1st implem of the ILight interface
    class MyModelLight1 : ILight {
    float3 ComputeAmbient(...) {
    ...
    return color;
    }
    ...
    };

    // A 2ns implem of the ILight interface
    class MyModelLight2 : ILight {
    float3 ComputeAmbient(...) {
    ...
    return color;
    }
    ...
    }

    // The variable through which we are going to access the light model
    ILight abstractLight;

    // We need to declare the two implems in order to get a reference
    // to them from C++ code
    MyModelLight1 modelLight1;
    MyModelLight2 modelLight2;

    float4 PixelShader(PS_INPUT Input ) : SV_Target
    {
    // Call the abstractLight that was previously setup by C++ at
    // PixelShader creation time
    float3 ambient = abstractLight.ComputeAmbient(Input.Pos);
    float3 diffuse = abstractLight.ComputeDiffuse(Input.Pos);
    float3 specular = abstractLight.ComputeSpecular(Input.Pos);

    return float4(saturate( Ambient + Diffuse + Specular ), 1.0);
    }

To be able to use this shader, we need to set up the abstractLight variable from the C++/C# code, through ID3D11Device::CreateClassLinkage and the instantiation of a pixel shader with ID3D11Device::CreatePixelShader.

As we can see, we need to declare the interface and class variables globally so that they can be accessed by the C++ program. This is the standard way to use dynamic linking in HLSL (a minimal C++-side sketch is shown below)... but what if we want to use this differently?
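
For reference, here is a minimal C++ sketch of that setup, not taken from the original article: 'device', 'context' and the compiled bytecode 'blob' are assumed to already exist, and "modelLight1" refers to the global class instance declared in the HLSL above.

// Create a class linkage object and associate it with the pixel shader.
ID3D11ClassLinkage* linkage = nullptr;
device->CreateClassLinkage(&linkage);

ID3D11PixelShader* pixelShader = nullptr;
device->CreatePixelShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                          linkage, &pixelShader);

// Fetch the instance named "modelLight1" and bind it to the single interface
// slot used by 'abstractLight'. (For a class with no data members,
// ID3D11ClassLinkage::CreateClassInstance would be used instead.)
ID3D11ClassInstance* lightInstance = nullptr;
linkage->GetClassInstance("modelLight1", 0, &lightInstance);
context->PSSetShader(pixelShader, &lightInstance, 1);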

    Hacking function pointers in HLSL


The principle is very simple: instead of using interfaces and classes as global variables, we can in fact use them as function parameters and even as local variables inside methods. The way to use them is then straightforward:
    // Base class for a calculator
    interface ICalculator {
    float Compute(...);
    };

    // 1st implem of the calculator
    class ClassicCalculator : ICalculator {
    float Compute(...) {
    ...
    return value;
    }
    };

    // 2nd implem of the calculator
    class ComplexCalculator : ICalculator {
    float Compute(...) {
    ...
    return value;
    }
    };

    // A function using the interface ICalculator
    float MyFunctionUsingICalculator(ICalculator calculator, ...) {
    ...
    value += calculator.Compute(...);
    ...
    return value;
    }

    // A Pixel shader using the ClassicCalculator
    float PixelShader1(PS_INPUT Input ) : SV_Target
    {
    ClassicCalculator classic;
    return MyFunctionUsingICalculator(classic, ...);
    }

    // A Pixel shader using the ComplexCalculator
    float PixelShader2(PS_INPUT Input ) : SV_Target
    {
    ComplexCalculator complex;
    return MyFunctionUsingICalculator(complex, ...);
    }

The previous example compiles flawlessly with ps_4_0 (Shader Model 4) or ps_3_0 (with some minor changes to the pixel shaders)! So basically, the interface ICalculator is acting as a function pointer, with two implementations available through the ClassicCalculator and ComplexCalculator classes. MyFunctionUsingICalculator doesn't have to change its signature to adapt to the underlying function, so as we can see, we have a workable way of expressing function pointers in HLSL.

Now, let's see if we can use this model to build our flexible noise functions. Replace ICalculator with an INoise interface. We can see that an implementation would have to call another INoise interface. In fact, ideally, we would like to write something like this:
    // Base class for a noise function
    interface INoise {
    float Compute(...);
    };

    // Perlin noise implem
    class PerlinNoise : INoise {
    float Compute(...) {
    ...
    return value;
    }
    };

    // FBM noise implem
    class FBMNoise : INoise {
    // Would be ideal to be able to do that
    // We could even make an abstract generic class
    // that could provide a base Source INoise
    // BUT, THIS IS NOT COMPILING!!!
    INoise Source;

    float Compute(...) {
    float value = 0.0f;
    float frequency = InitialFrequency;
    float amplitude = 1.0f;
    // Classic FBM loop
    for ( int i=0; i < Octaves; i++ )
    {
    // Call the source abstract INoise
    float noiseValue = Source.Compute(pos);
    value += amplitude * noiseValue;
    frequency *= Lacunarity;
    amplitude *= Amplitude;
    }
    return value;
    }
    };


    // A Pixel shader using the FBMNoise combined with PerlinNoise
    float PixelShader1(PS_INPUT Input ) : SV_Target
    {
    FBMNoise fbmNoise;
    PerlinNoise perlin;
    // This is not possible, interface variable members are not allowed
    fbmNoise.Source = perlin;
    return fbmNoise.Compute(...);
    }


Unfortunately, HLSL doesn't permit the use of interfaces as member variables! This limitation is quite annoying, as it excludes a whole range of combinations, like aggregation and composition... making these function pointers useful only for a very limited set of cases...
I have tried to overcome this problem using an abstract class instead of an interface, as classes can be declared as member variables of classes... but again, there is a huge limitation: the class member is in fact acting as a final or const variable that cannot be changed, making its usage almost useless...
But I knew that HLSL permits lots of unusual constructions, and this is where closures come to the rescue.

    Hacking Closures in HLSL


So we know that interfaces can be used as function pointers, but their usage is limited, as we cannot use any kind of composition. An interesting fact is that we can declare local variables in methods as classes or interfaces... The trick is to use a quite uncommon feature of HLSL: it is possible to declare local classes inside a method, and they can access local variables! Therefore, it is possible to achieve a kind of deferred composition/aggregation using this technique. Let's rewrite our noise functions using this new closure technique:

    1. Declare a INoise interface that is able to compute the noise by using a next INoise implementation.

    // It is possible to compile this code under ps_4_0 and ps_3_0

    // Declare our INoise interface
    interface INoise {
    // Here an interesting hack: We can declare a method that is returning a INoise
    // interface. This method will be implemented by the pixel shaders.
    INoise Next();

    // The compute method of a Noise
    float Compute(float2 pos);
    };

    2. Declare NoiseBase as an abstract implementation of INoise that implements both methods. If we had the keyword abstract in HLSL, we wouldn't have to implement the methods of this class.

    // We are creating an abstract class from INoise in order
    // to implement both methods
    class NoiseBase : INoise {
    INoise Next() {
    // This code will never be used. It is only
    // used to declare this class
    NoiseBase base;
    return base;
    }

    float Compute(float2 pos) {
    // This code will never be used. It is only
    // used to declare this class
    return Next().Compute(pos);
    }
    };

    3. Use NoiseBase to implement the final INoise functions. If you look at AbsNoise, FbmNoise or MarbleNoise, they use the INoise::Next() method to get an instance of the INoise interface they rely on. This is where function pointers are extremely useful.

    // PerlinNoise implem
    class PerlinNoise : NoiseBase {
    float Compute(float2 pos) {
    // call a standard perlin_noise implemented as a simple external function
    return perlin_noise(pos);
    }
    };

    // AbsNoise implem
    class AbsNoise : NoiseBase {
    float Compute(float2 pos) {
    // Note: We are using Next to access the next underlying function pointer
    return abs(Next().Compute(pos));
    }
    };

    // FbmNoise implem
    class FbmNoise : NoiseBase {
    float Compute(float2 pos) {
    float value = 0.0f;
    float amplitude = 1.0f;
    float frequency = InitialFrequency;
    for ( int i=0; i < Octaves; i++ )
    {
    float noiseValue = Next().Compute(pos * frequency); // sample at the current octave frequency
    value += amplitude * noiseValue;
    frequency *= Lacunarity;
    amplitude *= Amplitude;
    }
    return value;
    }
    };

    // MarbleNoise implem
    class MarbleNoise : NoiseBase {
    float Compute(float2 pos) {
    return stripes(2 * Next().Compute(pos), 1.6f);
    }

    static float stripes(float x, float f) {
    float t = .5 + .5 * sin(f * 2*PI * x);
    return t * t - .5;
    }
    };

    4. Implement the pixel shaders with the closure mechanism. We declare local classes that override the INoise::Next() method in order to chain INoise function pointers together.

    // Fbm -> PerlinNoise
    float FbmPerlinNoise2DPS( float4 pos : SV_POSITION, float2 texPos : TEXCOORD0 )
    : SV_Target
    {
    // Look! We are declaring a local class
    class Noise1 : PerlinNoise {} noise1;
    // and this local class can access local variables!
    // For example, Noise2 can access the previous noise1 variable.
    class Noise2 : FbmNoise { INoise Next() { return noise1; } } noise2;

    // Allowing us to cascade the calls and making a kind of deferred composition.
    return noise2.Compute(texPos);
    }

    // Fbm -> Abs -> PerlinNoise
    float FbmAbsPerlinNoise2DPS( float4 pos : SV_POSITION, float2 texPos : TEXCOORD0 )
    : SV_Target
    {
    class Noise1 : PerlinNoise {} noise1;
    class Noise2 : AbsNoise { INoise Next() { return noise1; } } noise2;
    class Noise3 : FbmNoise { INoise Next() { return noise2; } } noise3;

    // FbmNoise is calling indirectly AbsNoise that will call PerlinNoise.
    return noise3.Compute(texPos);
    }

    // Marble -> Fbm -> Abs -> PerlinNoise
    float MarbleFbmAbsPerlinNoise2DPS( float4 pos : SV_POSITION, float2 texPos : TEXCOORD0 )
    : SV_Target
    {
    class Noise1 : PerlinNoise {} noise1;
    class Noise2 : AbsNoise { INoise Next() { return noise1; } } noise2;
    class Noise3 : FbmNoise { INoise Next() { return noise2; } } noise3;
    class Noise4 : MarbleNoise { INoise Next() { return noise3; } } noise4;

    // MarbleNoise is calling FbmNoise that is calling indirectly AbsNoise
    // that will call PerlinNoise.
    return noise4.Compute(texPos);
    }


    // Fbm -> Marble -> Abs -> PerlinNoise
    float FbmMarbleAbsPerlinNoise2DPS( float4 pos : SV_POSITION, float2 texPos : TEXCOORD0 )
    : SV_Target
    {
    class Noise1 : PerlinNoise {} noise1;
    class Noise2 : AbsNoise { INoise Next() { return noise1; } } noise2;
    class Noise3 : MarbleNoise { INoise Next() { return noise2; } } noise3;
    class Noise4 : FbmNoise { INoise Next() { return noise3; } } noise4;

    // FbmNoise is calling MarbleNoise that is calling indirectly AbsNoise
    // that will call PerlinNoise.
    return noise4.Compute(texPos);
    }

    Et voila! As you can see, we are able to declare local classes from a pixel shader that act as closures. It is even possible to declare local classes that have specific code in their Compute() methods.
    Behind the scenes, when chaining the INoise::Next() methods, the fxc HLSL compiler sees all those classes as "INoise*".
    It is then possible to perform a fbm(marble(abs(perlin_noise()))) as well as a marble(fbm(abs(perlin_noise()))).

    In the end, it is effectively possible to implement closures in HLSL that can be used in SM4.0 as well as SM3.0!

    Improving closures chaining


    From the previous example, we can extend the concept by
    1. Adding static local constructors to each Noise function :
    // PerlinNoise implem
    class PerlinNoise : NoiseBase {
    float Compute(float2 pos) {
    // call a standard perlin_noise implemented as a simple external function
    return perlin_noise(pos);
    }
    // Add local "constructor"
    static INoise New() {
    PerlinNoise noise;
    return noise;
    }
    };

    // AbsNoise implem
    class AbsNoise : NoiseBase {
    float Compute(float2 pos) {
    // Note: We are using Next to access the next underlying function pointer
    return abs(Next().Compute(pos));
    }
    // Add local constructor and chain with From INoise
    static INoise New(INoise from) {
    class LocalNoise : AbsNoise { INoise Next() { return from; } } noise;
    return noise;
    }
    };

    // Add the same constructors to FbmNoise and MarbleNoise.
    // ....
    2. And then we can rewrite the Pixel shader functions to chain operators in a shorter form:
    // Fbm -> Marble -> Abs -> PerlinNoise
    float FbmMarbleAbsPerlinNoise2DPS( float4 pos : SV_POSITION, float2 texPos : TEXCOORD0 )
    : SV_Target
    {
    // FbmNoise is calling MarbleNoise that is calling indirectly AbsNoise
    // that will call PerlinNoise.
    return FbmNoise::New(MarbleNoise::New(AbsNoise::New(PerlinNoise::New()))).Compute(texPos);
    }

    This allows for a syntax that is even more concise and modular!

    Further Considerations


    This is a very exciting technique that could open up lots of abstraction opportunities while developing in HLSL. However, in order to use this technique, there are a couple of caveats and things to take into account:
    • An interface cannot inherit from another interface (that would be really interesting)
    • An interface can only have method members.
    • A class can inherit from another class and from several interfaces.
    • Unlike in C/C++, we cannot forward-declare an interface, but we can reference a type inside its own declaration (see the example of the method INoise::Next, which returns an INoise).
    • The compiler has a limitation regarding the reuse of an implementation in a call chain and will complain about a recursive call (even if there is no recursive call at all). For example, it is not possible to reuse the same type of class closure twice in a call chain, meaning that a chain like Marble => FBM => Marble => Abs => Perlin is not possible: the fxc compiler complains about the second "Marble", as it sees it as a kind of recursive call. In order to reuse a function, we need to duplicate it; that's probably the only annoying point here.
    • The compiled asm output generated from closures is exactly the same as with standard inlined methods.
    • Before settling on local class closures, I tried several techniques that were sometimes crashing the fxc compiler.
    • Thus, as this is a way of hacking the usage of HLSL, there is no guarantee that this will be supported in the future. But at least, since it works for SM5.0, SM4.0 and SM3.0, we can expect to be safe for a while!
    • Also, compilation time under the vs_3_0/ps_3_0 profiles seems to be longer; I'm not sure if it's due to this language construction or the regular behavior of the 3.0 profiles.
    Let me know if you are able to use this technique and if you find other interesting constructions or problems. It would be very interesting to dig a little more into the opportunities it opens. Lastly, I did a quick Google search about this kind of technique but didn't find anything... it could have been used already by someone else, so this whole technique is only a hypothetical new discovery, but I enjoyed discovering it a lot!

              Direct3D11 multithreading micro-benchmark in C# with SharpDX        
    Multi-threading is an important feature that was added to Direct3D11 two years ago and has been increasingly used in recent game engines in order to achieve better performance on PC. You can have a look at "DirectX 11 Rendering in Battlefield 3" from Johan Andersson/DICE, which gives great insight into how it was effectively used in practice in their game engine. Usage of the Direct3D11 multithreading API is pretty straightforward, and while we are also using it successfully at work in our R&D 3D engine, I hadn't taken the time to sit down with this feature and check how to get the best out of it.

    I recently came across a question on the gamedev forum about "[DX11] Command Lists on a Single Threaded Renderer": If command lists are an efficient way to store replayable drawing commands, would it be efficient to use them even in a single threaded scenario where lots of drawing commands are repeatable?

    In order to verify this, among other things, I did a simple micro-benchmark using C#/SharpDX, and while the results are somewhat expected, there are a couple of gotchas that deserve a more in-depth look...


    Direct3D11 Multi-threading : The basics


    I assume that general multi-threading concepts and advantages are already understood, so this section focuses on the Direct3D11 multi-threading API.

    There is already a nice "Introduction to Multithreading in Direct3D11" on msdn that is worth reading if you are already a little bit familiar with the Direct3D11 API.

    In Direct3D10, we only had the ID3D10Device class to perform object/resource creation and draw calls. The API was not thread safe, but it was possible to emulate some kind of deferred rendering by using mutexes and simplified command buffers to access the device safely.

    In Direct3D11, preparation of the draw calls is now parallelizable, while object/resource creation is thread safe. The API is now split between:
    • ID3D11Device, which is responsible for creating objects/resources/shaders and device contexts.
    • ID3D11DeviceContext, which holds all commands to set up the shader pipeline and perform all draw calls (including constant buffer updates, setup of shader resource views, samplers, blend states... etc.)

    When a Direct3D11 device is created, it provides a default ID3D11DeviceContext called an immediate context that is effectively used for immediate rendering. There is only one immediate context available per device.

    In order to use deferred rendering, we need to create new ID3D11DeviceContexts called deferred contexts: one context for each thread responsible for preparing a set of draw calls.
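    For reference, here is a minimal sketch of this setup in SharpDX. It assumes the SharpDX.Direct3D11.DeviceContext(Device) constructor (which wraps ID3D11Device::CreateDeferredContext); names are illustrative, not taken from the MultiCube sample.

    using SharpDX.Direct3D11;

    static class DeferredContextSetup
    {
    // Creates one deferred context per worker thread for the given device
    public static DeviceContext[] CreateDeferredContexts(Device device, int threadCount)
    {
    var contexts = new DeviceContext[threadCount];
    for (int i = 0; i < threadCount; i++)
    contexts[i] = new DeviceContext(device); // wraps ID3D11Device::CreateDeferredContext
    return contexts;
    }
    }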

    The sequence of multithreaded draw calls is then executed like this:
    Each secondary thread is responsible for preparing draw calls in a set of ID3D11CommandLists that will effectively be executed by the immediate context (in order to push them to the driver).

    The simplified version of the code to write is fairly easy:

    // Thread-1
    context[threadId1].InputAssembler.InputLayout = layout1;
    context[threadId1].InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
    context[threadId1].InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertices1, Utilities.SizeOf<Vector4>() * 2, 0));
    [...]
    context[threadId1].Draw(...)
    commandLists[threadId1] = context[threadId1].FinishCommandList(false);
    [...]
    // Thread-n
    context[threadIdn].InputAssembler.InputLayout = layoutn;
    context[threadIdn].InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
    context[threadIdn].InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(verticesn, Utilities.SizeOf<Vector4>() * 2, 0));
    [...]
    context[threadIdn].Draw(...)
    commandLists[threadIdn] = context[threadIdn].FinishCommandList(false);

    // Rendering Thread
    for (int i = 0; i < threadCount; i++)
    {
    var commandList = commandLists[i];
    // Execute the deferred command list on the immediate context
    immediateContext.ExecuteCommandList(commandList, false);
    commandList.Dispose();
    }

    The API provides several key advantages:
    • We can easily switch the code between the immediate context and a deferred context. Thus, using the multi-threading part of the Direct3D11 API doesn't hurt our code.
    • The API is supported on downlevel hardware (from Direct3D11 down to Direct3D9).
    • The underlying driver can take advantage of FinishCommandList to perform some native layout that will help the deferred ExecuteCommandList command run faster.
    About the "native support from driver", It can be checked by using CheckFeatureSupport (or directly in SharpDX using CheckThreadingSupport) but it seems that almost only NVIDIA (and quite recently, around this year), is supporting this feature natively. On my previous ATI 6850 and now on my 6900M are not supporting it. Is this bad? We will see that the default Direct3D11 runtime is performing just fine for this, but doesn't provide any extra boost.

    We will also see that there is an interesting issue with the usage of Map/Unmap versus UpdateSubresource to update constant buffers: their respective usage under a multithreading scenario can hurt performance.
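    To make the comparison concrete, here is a minimal sketch of the two ways to update a constant buffer in SharpDX (the MapSubresource overload taking a Buffer and returning a DataStream is an assumption; overloads vary between SharpDX versions, and this is not the exact MultiCube code):

    using SharpDX;
    using SharpDX.Direct3D11;

    static class ConstantBufferUpdate
    {
    // Option 1: Map/Unmap with WriteDiscard
    public static void UpdateWithMap(DeviceContext context, Buffer constantBuffer, Matrix worldViewProj)
    {
    DataStream stream;
    context.MapSubresource(constantBuffer, MapMode.WriteDiscard, MapFlags.None, out stream);
    stream.Write(worldViewProj);
    context.UnmapSubresource(constantBuffer, 0);
    }

    // Option 2: UpdateSubresource
    public static void UpdateWithUpdateSubresource(DeviceContext context, Buffer constantBuffer, Matrix worldViewProj)
    {
    context.UpdateSubresource(ref worldViewProj, constantBuffer);
    }
    }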

    MultiCube, a Direct3D11 Multi-threading micro-benchmark


    In order to stress-test multi-threading using Direct3D11, I have developed a simple application called MultiCube (available as part of SharpDX samples: See Program.cs)


    This application performs the following benchmark: it renders n x n cubes on the screen, each cube with its own rotation matrix. You can modify the number of cubes from 1 (1x1) to 65536 (256x256). The title bar includes some benchmark measurements (FPS / time per frame) and you can change the behavior of the application with the following keys:
    • F1: Switch between Immediate Test (no threading), Deferred Test (Threading), and Frozen-Deferred Test (execute a pre-prepared CommandList on the ImmediateContext)
    • F2: Switch between Map/Unmap mode and UpdateSubresource mode to update constant buffers.
    • F3: Burn the CPU on/off. This is where multithreading usage makes the difference, and we are going to analyse the results a bit more. When this option is on, it simulates lots of CPU calculation on the deferred threads. If it is off, it will just batch the draw calls (which are simple: they are just cubes!)
    • Left-Right arrows: Decrease/Increase the number of cubes to display (default 64x64)
    • Down-Up arrows: Decrease/Increase the number of threads used (only for Deferred Test mode)
    When the deferred mode is selected, each thread renders a set of rows in a batch. If you have, for example, 100x100 cubes to render and 5 threads, each thread will draw 20x100 cubes.
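    In code, the partitioning described above could look like this minimal sketch (variable names are illustrative, not the actual MultiCube code):

    static class RowPartition
    {
    // Splits rowCount rows among threadCount threads; each thread gets a contiguous row range
    public static void GetRange(int rowCount, int threadCount, int threadIndex, out int startRow, out int endRow)
    {
    int rowsPerThread = rowCount / threadCount; // e.g. 100 rows / 5 threads = 20 rows per thread
    startRow = threadIndex * rowsPerThread;
    // the last thread picks up any remaining rows
    endRow = (threadIndex == threadCount - 1) ? rowCount : startRow + rowsPerThread;
    }
    }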

    If your graphics driver doesn't natively support multithreading, you will see a "*" just after the Deferred mode.

    You can download the application here. It is a single exe that doesn't need any kind of install (apart from the DirectX June 2010 runtime). Also, being able to pack this application into a single exe is a unique feature of SharpDX: static linking of a .NET exe with the SharpDX DLLs.


    Results


    I ran two types of tests:
    1. Draw 65536 cubes with the Burn-CPU option ON and OFF, comparing Immediate and Deferred rendering (ranging from 1 thread to 6 threads).
    2. Draw 1024 cubes switching between Map/Unmap and UpdateSubresource, comparing the results between Immediate and Deferred rendering.
    Two machines with the same main processor (Intel i7-2600K, 8 GB RAM) were used, one with an NVIDIA GTX 570 and the other with an ATI 6900M graphics card.


    65536 Drawcalls - BurnCpu: On
    Type / Threads 1 2 3 4 6
    Nvidia-GTX 570 Deferred 232ms 130ms 98ms 92ms 82ms
    Nvidia-GTX 570 Immediate 220ms 220ms 220ms 220ms 220ms
    ATI 6900M Deferred 231ms 131ms 98ms 93ms 84ms
    ATI 6900M Immediate 228ms 228ms 228ms 228ms 228ms

    Fig2. 65536 draw calls with CPU intensive threads, comparison between Immediate and Deferred rendering


    65536 Drawcalls - BurnCpu: Off
    Type / Threads 1 2 3 4 6
    Nvidia-GTX 570 Deferred 31ms 24ms 21ms 20ms 20ms
    Nvidia-GTX 570 Immediate 19ms 19ms 19ms 19ms 19ms
    ATI 6900M Deferred 32ms 28ms 28ms 28ms 28ms
    ATI 6900M Immediate 28ms 28ms 28ms 28ms 28ms

    Fig3. 65536 draw calls with CPU light threads, comparison between Immediate and Deferred rendering

    And finally the Map/Unmap and UpdateSubresource test:

    1024 Drawcalls - Type Map Update
    Nvidia-GTX 570  Immediate - 1024 0.6ms 1.1ms
    Nvidia-GTX 570  Deferred - 1024 0.92ms 7.32ms
    ATI 6900M Immediate - 1024 0.6ms 0.6ms
    ATI 6900M Deferred - 1024 0.6ms 0.6ms


    Analysis


    If we examine the results a little more carefully, there are a couple of interesting things to highlight:

    • Using multithreading and deferred context rendering is only relevant when the CPU is effectively used on each thread (that sounds obvious, but at least it is now clear!). When we are not using the CPU, immediate rendering is in fact faster!
    • Multithreaded rendering in a CPU-intensive application can perform 3-4x faster than a single-threaded application (provided we have enough CPU cores to dispatch the rendering jobs).
    • The "native support from driver" of Direct3D11 multithreading doesn't seem to change much: compared to the NVIDIA graphics card that supports it, we don't see a huge difference with AMD.
    • Usage of UpdateSubresource on an NVIDIA card is 8x slower in a multithreading situation and hurts the performance of the application a lot: use Map/Unmap instead!
    Of course, as usual, this is a synthetic micro-benchmark that should be taken with caution and cannot reflect every test case, so you need to perform your own benchmark if you have to decide whether to use multithreaded rendering!

    Finally, to respond to the original gamedev question, I provided a "Frozen Deferred" test in MultiCube to check whether rendering a pre-prepared CommandList is actually faster than executing the calls with an immediate context: it seems that it currently doesn't make any difference (but to be sure, I would have to run this benchmark on several different machine/CPU/graphics card/driver configs in order to fully verify it).
              SharpDX, a new managed .Net DirectX API available        
    If you have followed my previous work on a new .NET API for Direct3D 11, I proposed this solution to the SlimDX team for the v2 of their framework, joined their team around one month ago, and have been actively working to widen the coverage of the DirectX API. I have been able to extend the coverage to almost the whole API, and to develop Direct2D samples, as well as XAudio2 and XAPO samples, using it. But due to some incompatible directions that the SlimDX team wanted to follow, I have decided to also release my work under a separate project called SharpDX. Now, you may wonder why I'm releasing this new API under a separate project from SlimDX?

    Well, I have been working really hard on this since the beginning of September, and I explained why in my previous post about Direct3D 11. I have checked in lots of code under the v2 branch of SlimDX, while having lots of discussions with the team (mostly Josh, who is mainly responsible for v2) on their development mailing list. The reason I'm leaving the SlimDX team is that it was in fact not made clear to me that I would not be part of the decisions for the v2 direction, although I was bringing a whole solution (by "whole", I mean a large proof of concept, not something robust and finished). At some point, Josh told me that Promit, Mike and himself, co-founders of SlimDX, were the technical leaders of this project and would have the last word on the direction as well as on decisions for the v2 API.

    Unfortunately, I was not expecting to work on such terms with them, considering that I had already done 100% of the engineering prototype for the next API. Over the last few days, we had lots of -small- technical discussions, but for some of them, I clearly didn't agree with the decisions that were taken, whatever arguments I tried to give them. This is a bit of a disappointment for me, but well, that's the life of open source projects. This is their project and they have other plans for it. So, I have decided to release the project on my own with SharpDX, although you will see that the code is currently exactly the same as on the v2 branch of SlimDX (of course, because until yesterday, I was working on the SlimDX v2 branch).

    But things are going to change for both projects: SlimDX is taking the robust route (which I agree with) but with some decisions that I don't agree with (in terms of implementation and direction). Although it may sound weird, SharpDX is not intended to compete with SlimDX v2: they clearly have a different scope (supporting, for example, Direct3D 9, which I don't really care about), a different target, a different view on exposing the API, and there is already a large existing community around SlimDX. So SharpDX is primarily intended for my own work on demomaking. Nothing more. I'm releasing it because SlimDX v2 is not going to be available soon, even as an alpha version. On my side, I consider that the current state of the SharpDX API (although far from being as clean as it should be) is usable, and I'm going to use it on my own, while improving the generator and parser to make the code safer and more robust.

    So, I did lots of work to bring new APIs into this system, including:
    • Direct3D 10
    • Direct3D 10.1
    • Direct3D 11
    • Direct2D 1
    • DirectWrite
    • DXGI
    • DXGI 1.1
    • D3DCompiler
    • DirectSound
    • XAudio2
    • XAPO
    And I have also been working on some nice samples, for example using Direct2D and Direct3D 10, including the usage of the Direct2D tessellate API, in order to see how well it works compared to the gluTessellation methods that are most commonly used. You will find that the code to do such a thing is extremely simple in SharpDX:
    using System;
    using System.Drawing;
    using SharpDX.Direct2D1;
    using SharpDX.Samples;

    namespace TessellateApp
    {
    /// <summary>
    /// Direct2D1 Tessellate Demo.
    /// </summary>

    public class Program : Direct2D1DemoApp, TessellationSink
    {
    EllipseGeometry Ellipse { get; set; }
    PathGeometry TesselatedGeometry{ get; set; }
    GeometrySink GeometrySink { get; set; }

    protected override void Initialize(DemoConfiguration demoConfiguration)
    {
    base.Initialize(demoConfiguration);

    // Create an ellipse
    Ellipse = new EllipseGeometry(Factory2D,
    new Ellipse(new PointF(demoConfiguration.Width/2, demoConfiguration.Height/2), demoConfiguration.Width/2 - 100,
    demoConfiguration.Height/2 - 100));

    // Populate a PathGeometry from Ellipse tessellation
    TesselatedGeometry = new PathGeometry(Factory2D);
    GeometrySink = TesselatedGeometry.Open();
    // Force RoundLineJoin otherwise the tesselated looks buggy at line joins
    GeometrySink.SetSegmentFlags(PathSegment.ForceRoundLineJoin);

    // Tesselate the ellipse to our TessellationSink
    Ellipse.Tessellate(1, this);

    // Close the GeometrySink
    GeometrySink.Close();
    }


    protected override void Draw(DemoTime time)
    {
    base.Draw(time);

    // Draw the TextLayout
    RenderTarget2D.DrawGeometry(TesselatedGeometry, SceneColorBrush, 1, null);
    }

    void TessellationSink.AddTriangles(Triangle[] triangles)
    {
    // Add Tessellated triangles to the opened GeometrySink
    foreach (var triangle in triangles)
    {
    GeometrySink.BeginFigure(triangle.Point1, FigureBegin.Filled);
    GeometrySink.AddLine(triangle.Point2);
    GeometrySink.AddLine(triangle.Point3);
    GeometrySink.EndFigure(FigureEnd.Closed);
    }
    }

    void TessellationSink.Close()
    {
    }

    [STAThread]
    static void Main(string[] args)
    {
    Program program = new Program();
    program.Run(new DemoConfiguration("SharpDX Direct2D1 Tessellate Demo"));
    }
    }
    }

    This simple example produces the following output:


    which is pretty cool considering the amount of code (although the Direct3D 10 and D2D initialization parts would add a bit more code), and I found this to be much simpler than the gluTessellation API.

    You will also find some other samples, like the XAudio2 ones, generating a synthesized sound with the usage of reverb, and even some custom XAPO sound processors!

    You can grab those samples from the SharpDX code repository (there is a SharpDXBinAndSamples.zip with a working solution containing all the samples I have been developing so far, along with the MiniTris sample from SlimDX).
              Implementing an unmanaged C++ interface callback in C#/.Net        
    Ever wanted to implement a C++ interface callback in a managed C# application? Well, although that's not so hard, this is a solution that you will hardly find on the Internet... the most common answer you will get is that it's not possible, or that you should use C++/CLI in order to achieve it... In fact, in C#, you can only implement a C function delegate through the use of Marshal.GetFunctionPointerForDelegate, but you won't find anything like Marshal.GetInterfacePointerFromInterface. You may wonder why I need such a thing?

    In my previous post about implementing a new DirectX fully managed API, I forgot to mention the case of interfaces callbacks. There are not so many cases in Direct3D 11 API where you need to implement a callback. You will more likely find more use-cases in audio APIs like XAudio2, but in Direct3D 11, afaik, you will only find 3 interfaces that are used for callback:
    • ID3DInclude which is used by D3DCompiler API in order to provide a callback for includes while using preprocessor or compiler API (see for example D3DCompile).
    • ID3DX11DataLoader and ID3DX11DataProcessor, which are used by some D3DX functions in order to perform asynchronous loading/processing of texture resources. The nice thing about C# is that those interfaces are not really needed, as it is much easier to implement the equivalent functionality directly in C#.
    So I'm going to take the example of ID3DInclude, and show how it has been successfully implemented for SharpDX.

    Memory layout of a C++ object implementing pure virtual methods


    If you know how a C++ interface with pure virtual methods is laid out in memory, it's fairly easy to imagine how to hack C# to provide such a thing, but if you don't, here is a quick summary:

    For example, the ID3DInclude C++ interface is declared like this :
    // Interface declaration
    DECLARE_INTERFACE(ID3DInclude)
    {
    STDMETHOD(Open)(THIS_ D3D_INCLUDE_TYPE IncludeType, LPCSTR pFileName, LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes) PURE;
    STDMETHOD(Close)(THIS_ LPCVOID pData) PURE;
    };

    DECLARE_INTERFACE is a Windows macro that is defined in ObjBase.h and will expand the previous declaration in C++ like this:

    struct ID3DInclude {
    virtual HRESULT __stdcall Open(D3D_INCLUDE_TYPE IncludeType, LPCSTR pFileName, LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes) = 0;

    virtual HRESULT __stdcall Close(LPCVOID pData) = 0;
    };

    Implementing and using this interface in C++ is straightforward:
    struct MyIncludeCallback : public ID3DInclude {
    virtual HRESULT __stdcall Open(D3D_INCLUDE_TYPE IncludeType, LPCSTR pFileName, LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes) {
    /// code for Open callback
    }

    virtual HRESULT __stdcall Close(LPCVOID pData) {
    /// code for Close callback
    }
    };

    // Usage
    ID3DInclude* include = new MyIncludeCallback();

    // Compile a shader and use our Include provider
    D3DCompile(..., include, ...);

    The hack here is to clearly understand how an instance of ID3DInclude is laid out in memory through the Virtual Method Table (VTBL)... Oh, it's really funny to see that the Wikipedia article doesn't use any visual table to represent a virtual table... ok, let's remedy that. If you look at the memory address of an instantiated object, you will find an indirect pointer:

    Fig 1. Virtual Method Table layout in memory
    So from the pointer to a C++ object implementing pure virtual methods, you will find that the first value is a pointer to a VTBL which is shared among the same type of object (here MyIncludeCallback).

    Then in the VTBL, the first value is a pointer to the Open() method implementation in memory. The second to the Close() method.

    According to the calling convention, what would the declaration of this Open() function look like if we had to implement it in pure C?
    HRESULT __stdcall MyOpenCallbackFunction(void* thisObject, D3D_INCLUDE_TYPE IncludeType, LPCSTR pFileName, LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes) {
    /// code for Open callback
    }
    Simply add a "this object" as the 1st parameter of the callback function (which represents a pointer to the MyIncludeCallback instance in memory) and you have a callback at the function level!

    You should now understand how we can easily hack this to provide a C++ interface callback in C#.

    Translation to the C#/.Net world


    The solution is fairly simple. In order to be able to pass a C++ interface callback implemented in C# to an unmanaged function, we need to replicate how the unmanaged world is going to call the unmanaged functions and how it expects the interface to be laid out in memory.

    First, we need to define the ID3DInclude interface in pure C#:
    public partial interface Include
    {
    /// <summary>
    /// A user-implemented method for opening and reading the contents of a shader #include file.
    /// </summary>
    /// <param name="type">A <see cref="SlimDX2.D3DCompiler.IncludeType"/>-typed value that indicates the location of the #include file. </param>
    /// <param name="fileName">Name of the #include file.</param>
    /// <param name="parentStream">Pointer to the container that includes the #include file.</param>
    /// <param name="stream">Stream that is associated with fileName to be read. This reference remains valid until <see cref="SlimDX2.D3DCompiler.Include.Close"/> is called.</param>
    /// <unmanaged>HRESULT Open([None] D3D_INCLUDE_TYPE IncludeType,[None] const char* pFileName,[None] LPCVOID pParentData,[None] LPCVOID* ppData,[None] UINT* pBytes)</unmanaged>
    //SlimDX2.Result Open(SlimDX2.D3DCompiler.IncludeType includeType, string fileNameRef, IntPtr pParentData, IntPtr dataRef, IntPtr bytesRef);
    void Open(IncludeType type, string fileName, Stream parentStream, out Stream stream);

    /// <summary>
    /// A user-implemented method for closing a shader #include file.
    /// </summary>
    /// <remarks>
    /// If <see cref="SlimDX2.D3DCompiler.Include.Open"/> was successful, Close is guaranteed to be called before the API using the <see cref="SlimDX2.D3DCompiler.Include"/> interface returns.
    /// </remarks>
    /// <param name="stream">This is a reference that was returned by the corresponding <see cref="SlimDX2.D3DCompiler.Include.Open"/> call.</param>
    /// <unmanaged>HRESULT Close([None] LPCVOID pData)</unmanaged>
    void Close(Stream stream);
    }

    Clearly, this is not exactly what we have in C++... but this is how we would use it... through the usage of Stream. An implementation of this interface would provide a Stream for a particular file to include (most of the time, that could be as simple as stream = new FileStream(fileName, FileMode.Open)).

    This interface is public in the C#/.Net API... but internally we are going to use a wrapper of this interface that manually creates the object layout in memory as well as the VTBL. This is done in this simple constructor:

    /// <summary>
    /// Internal Include Callback
    /// </summary>
    internal class IncludeCallback
    {
    public IntPtr NativePointer;
    private Include _callback;
    private OpenCallBack _openCallBack;
    private CloseCallBack _closeCallBack;

    public IncludeCallback(Include callback)
    {
    _callback = callback;
    // Allocate object layout in memory
    // - 1 pointer to the VTBL
    // - followed by the VTBL itself - with 2 function pointers for the Open and Close methods
    NativePointer = Marshal.AllocHGlobal(IntPtr.Size * 3);

    // Write the pointer to the vtbl
    IntPtr vtblPtr = IntPtr.Add(NativePointer, IntPtr.Size);
    Marshal.WriteIntPtr(NativePointer, vtblPtr);
    _openCallBack = new OpenCallBack(Open);
    Marshal.WriteIntPtr(vtblPtr, Marshal.GetFunctionPointerForDelegate(_openCallBack));
    _closeCallBack = new CloseCallBack(Close);
    Marshal.WriteIntPtr(IntPtr.Add(vtblPtr, IntPtr.Size), Marshal.GetFunctionPointerForDelegate(_closeCallBack));
    }

    You can clearly see from the previous code that we are allocating an unmanaged memory block that will hold the object's VTBL pointer and the VTBL itself... Because we don't need to make 2 allocations (one for the object's vtbl_ptr/data, one for the vtbl), we are laying out the VTBL just after the object itself, like this:


    The declarations of the C# delegates are then straightforward from the C++ declaration:
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    private delegate SlimDX2.Result OpenCallBack(IntPtr thisPtr, SlimDX2.D3DCompiler.IncludeType includeType, IntPtr fileNameRef, IntPtr pParentData, ref IntPtr dataRef, ref int bytesRef);

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    private delegate SlimDX2.Result CloseCallBack(IntPtr thisPtr, IntPtr pData);
    You just have to implement the Open and Close methods in the wrapper and redirect the calls to the managed Include callback, et voila!
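    To give an idea of what this redirection can look like, here is a minimal sketch of the two wrapper methods matching the delegates above; they complete the IncludeCallback class shown earlier. This is not the exact SharpDX implementation: Result.Ok, the ANSI string marshaling and the way the stream content is copied to unmanaged memory are assumptions, and a real implementation would track the managed Stream associated with each unmanaged pointer so that Close can hand it back to the user callback.

    private SlimDX2.Result Open(IntPtr thisPtr, SlimDX2.D3DCompiler.IncludeType includeType, IntPtr fileNameRef, IntPtr pParentData, ref IntPtr dataRef, ref int bytesRef)
    {
    // Ask the managed callback for a Stream corresponding to the requested file
    Stream stream;
    _callback.Open(includeType, Marshal.PtrToStringAnsi(fileNameRef), null, out stream);

    // Copy the managed stream content into unmanaged memory for the native caller
    var buffer = new byte[(int)stream.Length];
    stream.Read(buffer, 0, buffer.Length);
    dataRef = Marshal.AllocHGlobal(buffer.Length);
    Marshal.Copy(buffer, 0, dataRef, buffer.Length);
    bytesRef = buffer.Length;
    return SlimDX2.Result.Ok; // assumed helper for S_OK
    }

    private SlimDX2.Result Close(IntPtr thisPtr, IntPtr pData)
    {
    // Free the buffer allocated in Open; a real implementation would also look up
    // the managed Stream associated with pData and pass it to _callback.Close
    Marshal.FreeHGlobal(pData);
    return SlimDX2.Result.Ok;
    }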

    Then, when calling an unmanaged function that requires this callback, you just have to wrap an Include instance with the callback like this:
    Include myIncludeInstance = ... new ...;

    IncludeCallback callback = new IncludeCallback(myIncludeInstance);

    // callback.NativePointer is a pointer to the object/vtbl allocated structure
    D3D.Compile(..., callback.NativePointer, ...);

    Of course, the IncludeCallback is not visible from the public API but is used internally. From a public interface POV, here is how you would use it:
    using System;
    using System.IO;
    using SlimDX2.D3DCompiler;

    namespace TestCallback
    {
    class Program
    {
    class MyIncludeCallBack : Include
    {
    public void Open(IncludeType type, string fileName, Stream parentStream, out Stream stream)
    {
    stream = new FileStream(fileName, FileMode.Open);
    }

    public void Close(Stream stream)
    {
    stream.Close();
    }
    }

    static void Main(string[] args)
    {
    var include = new MyIncludeCallBack();
    string value = ShaderBytecode.PreprocessFromFile("test.fx", null, include);
    Console.WriteLine(value);
    }
    }
    }

    You can have a look at the complete source code here.
              A new managed .NET/C# Direct3D 11 API generated from DirectX SDK headers        
    I have been quite busy since the end of August, personally, because I'm proud to announce the birth of my daughter! (and her older brother is, since then, asking for a lot more attention ;) ) and also because I've been working hard on an exciting new project based on .NET and Direct3D.

    What is it? Yet Another Triangle App? Nope, this is in fact an entirely new .NET API for Direct3D11, DXGI and D3DCompiler that is fully managed, without using any mixed C++/CLI assemblies, but with performance similar to a true C++/CLI API (like SlimDX). But the main characteristic and most exciting thing about this new wrapper is that the whole marshal/interop code is fully generated from the DirectX SDK headers, including the MSDN documentation.

    The current key features and benefits of this approach are:

    • API is generated from the DirectX SDK headers: the mapping is able to perform "complex transformations", extracting all relevant information like enumerations, structures, interfaces, functions, macro definitions and GUIDs from the C++ source headers. For example, the mapping process is able to generate properties for interfaces, or inner interface groups like the ones you have in SlimDX: meaning that instead of having "device.IASetInputLayout" you are able to write "device.InputAssembler.InputLayout = ...".
    • Full support of the Direct3D 11, DXGI 1.0/1.1 and D3DCompiler APIs: due to the whole auto-generation process, the actual coverage is 100%. I have limited the generated code to those libraries, but it could be extended to other APIs quite easily (like XAudio2, Direct2D, DirectWrite... etc.).
    • Pure managed .NET API: assemblies are compiled with the AnyCpu target. You can run your code on an x64 or an x86 machine with the same assemblies.
    • API Extensibility: the generated code is in C#, all the types are marked "partial" and are easily extensible to provide new helper methods (see the small sketch after this list). The code generator is able to hide some methods/types internally in order to use them in helper methods and to hide them from the public API.
    • C++/CLI Speed : the framework is using a genuine way to avoid any C++/CLI while still achieving comparable performance.
    • Separate assemblies : a core assembly containing common classes and an assembly for each subgroup API (Direct3D, DXGI, D3DCompiler)
    • Lightweight assemblies: generated assemblies are lightweight, 300 KB in total, 70 KB compressed in an archive (similar assemblies in C++/CLI would be closer to 1 MB, one for each architecture, and would depend on MSVCRT10).
    • API naming convention very close to the SlimDX API (making it 100% identical would just require specifying the correct mapping names while generating the code).
    • Raw DirectX object life management : No overhead of ObjectTable or RCW mechanism, the API is using direct native management with classic COM method "Release". Currently, instead of calling Dispose, you should call Release (and call AddRef if you are duplicating references, like in C++). I might evaluate how to safely integrate Dispose method call. 
    • Easily obfuscatable: due to the fact that the framework is not using any mixed assemblies.
    • DirectX SDK Documentation integrated in the .NET xml comments : The whole API is also generated with the MSDN documentation. Meaning that you have exactly the same documentation for DirectX and for this API (this is working even for method parameters, remarks, enum items...etc.). Reference to other types inside the documentation are correctly linked to the .NET API. 
    • Prototype for a partial support of the Effects11 API in full managed .NET.
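    As mentioned in the API Extensibility point above, here is a minimal, self-contained sketch of how the "partial" mechanism lets hand-written helpers live next to generated code (GeneratedDevice is a made-up name for illustration, not the real generated type):

    namespace PartialClassSketch
    {
    // File 1: what the code generator would produce (simplified)
    public partial class GeneratedDevice
    {
    public System.IntPtr NativePointer { get; set; }
    }

    // File 2: hand-written helpers contributing to the same class
    public partial class GeneratedDevice
    {
    public bool IsValid
    {
    get { return NativePointer != System.IntPtr.Zero; }
    }
    }
    }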
    If you have been working with SlimDX, some of the features here may sound familiar, and you may wonder why another .NET DirectX API when there is a great project like SlimDX? Before going further into the details of this wrapper and how things work in the background, I'm going to explain why this wrapper could be interesting.

    I'm also currently not in a position to release it, for the reason that I don't want to compete with SlimDX. I want to see if the SlimDX team would be interested in working together on this system, a kind of joint venture. There are still lots of things to do, improving the mapping, making it more reliable (the whole code here has been written in a rush over the past month...), but I strongly believe that this could be a good starting point for SlimDX 2, though I might be wrong... also, SlimDX could be thinking about another road map... So this is a message to the SlimDX team: Promit, Josh, Mike, I would be glad to hear some comments from you about this wrapper (and if you want, I could send you the generated API so that you can look at it and test it!)

    [Updated 30 November 2010]
    This wrapper is now available from SharpDX. Check this post.
    [/Updated]

    This post is going to be quite long, so if you are not interested by all the internals, you could jump to the sample code at the end.

    An attempt to a SlimDX next gen


    First of all, is it related to 4k or 64k intros? (a usual question here, mostly a question for myself :D) Well, while I'm still working on making things smaller, even in .NET, I would like to work on a demo based on .NET (but with lots of procedurally generated textures and music). I have been evaluating both XNA and SlimDX, and in September I was even working on an XNA-like API on top of SlimDX / Direct3D 11 that was working great, simplifying the code a lot, while still having the benefits of the new D3D11 API (geometry shaders, compute shaders... etc.). I will talk later about this "Demo" layer API.

    As a demo maker of tiny executables, even in .NET, I found that working with SlimDX was not the best option: even after stripping the code and recompiling SlimDX to keep only DirectX11/DXGI & co, I had a roughly 1 MB DLL (one for each architecture) plus a dependency on MSVCRT10, which is a bit annoying. Even for a demo (with less size constraint), I didn't want to have a 100 KB exe and 1 MB of compressed external DLLs...

    Also, I read some of Josh's thoughts about SlimDX 2: I was convinced about the need for separate assemblies and simplified object life management, but I was not convinced by the need to use "interfaces" for the new API, and not really happy about still having some platform-specific mixed assemblies in order to correctly support 32/64 bit architectures (with simple delay loading).

    What is SlimDX 2 supposed to address over SlimDX?
    • Making object life management closer to the real thing (no Dispose but raw Release instead) 
    • Multiple assemblies
    • Working on the API more with C# than in C++/CLI
    • Support automatic platform architecture switching (running transparently an executable on a x86 and x64 machine without recompiling anything).
    Recall that around August I was already working a bit on parsing the SDK headers based on Boost::Wave V2.0. My concern was that I had developed a SlimDX-like interface in C++ for the Ergon demo, and I found the process very laborious, although very straightforward, while staying in the same language as DirectX... Thinking more about it, and because I wanted to do more work in 3D and C# (damn it, this language is SOOO cool and powerful compared to C++)... I found that it would be a great opportunity to see if it's possible to extract enough information from the SDK headers in order to generate a Direct3D 11 .NET/C# API.

    And everything went surprisingly fast: extraction of all the code information from the SDK C++ header files was in fact quite easy to code, in a few days... and generating the code was quite easy as well (I have to admit that I have strong experience in this kind of process, and did similar work around ten years ago in Java, delivering an innovative Java/COM bridge layer for the company I was working for at that time, much safer than Sun's Java/COM layer, which was buggy, and much more powerful, supporting early binding, inheritance, documentation... etc.).

    In fact, with this generation process, I have been able to address almost all the issues that were expected to be solved in SlimDX 2, and moreover, it goes a bit further because the process is automated and it supports the x86/x64 platforms without requiring any mixed assemblies.

    In the following sections, I'm going to explain in depth the architecture, features, internals and mapping rules used to generate this new .NET wrapper (which currently has the "SharpDX" code name).

    Overview


    In order to generate a managed .NET API for DirectX from the SDK headers, the process is composed of 3 main steps:
    1. Convert the DirectX SDK C++ headers to an intermediate format called "XIDL", which is a mix of XML and "IDL". This first part is responsible for reverse engineering the headers, extracting back all existing and useful information (more on that in the following section), and producing a kind of IDL (Intermediate Definition Language). In fact, if I had access to the IDL used internally at Microsoft, it wouldn't have been necessary to write this whole part, but sadly, the DirectX 11 IDL is not available, although you can clearly see from D3D11.h that this file is generated from an IDL. This module is also responsible for accessing the MSDN website and crawling the needed documentation, and associating it with all the language elements (structures, structure fields, enums, enum items, interfaces, interface methods, method parameters... etc.). Once the documentation has been retrieved, it's stored on disk and is not retrieved the next time the conversion process is re-run.
    2. Convert the XIDL file to several C# files. This part is responsible for performing, based on a set of mapping rules, a translation of C++ definitions to C# definitions. The mapping is as complex as identifying which include maps to which assembly/namespace, which type could be moved to an assembly/namespace, how to rename the types, functions, fields and parameters, how to add missing information to the XIDL file... etc. The current mapping rules are expressed in less than 600 lines of C# code... There is also a trick here not described in the picture: this process also generates a small interop assembly which is only used at compile time, dynamically regenerated at runtime, and responsible for filling the gap between what is possible in C# and what you can do in C++/CLI (there are lots of small useful IL bytecode instructions generated by C++/CLI that are not accessible from C#; this assembly is here for that... more on this later).
    3. Integrate the generated files into several Visual Studio projects and a global solution. Each project generates an assembly. This is where you can add custom code that could not be generated (like Vector3 math functions, or general framework objects like a ComObject). The generated code is also fully marked with "partial" classes, one of the cool things of C#: you can have multiple files contributing to the same class declaration... making it easy to have generated code side by side with custom handmade code.


    Revert DirectX IDL from headers


    Unfortunately, I have not found a workable C preprocessor written in .NET, and making this part work was a bit laborious. The good thing is that I found Boost Wave 2.0 in C++. The bad thing is that this library, written in a heavy boost-STL-templatized philosophy, was really hard to get working under a C++/CLI DLL. The principle was to embed Boost Wave in a managed DLL in order to use it from C#... after several attempts, I was not able to build it with C++/CLI under .NET 4.0. So I ended up with a small COM wrapper DLL around Boost Wave, and a thin wrapper in .NET calling this DLL. Compiling Boost Wave was also sometimes a nightmare: I tried to implement my own stream provider for Wave... but I was dealing with a linker error that froze VS2010 for 5s just to display the error (several KB of a single cascaded template error)... I found somewhere in the Wave release notes that it was in fact not supported... wow, templates are supposed to make life easier... but the way they are used here gives a really bad feeling... (and I'm not a beginner in C++ templates...)

    Anyway, after succeeding in wrapping the Boost Wave API, I had a bunch of tokens to process. I started to write a handwritten C/C++ parser, which is targeted at reading well-formed DirectX headers and nothing else. It was quite tricky sometimes, and the code is far from being failsafe, but I succeeded in parsing most of the DirectX headers correctly. During the mapping to C#, I was able to find a couple of errors in the parser that were easy to fix.

    In the end, this parser is able to extract from the headers:
    • Enumerations, Structures, Interfaces, Functions, Typedefs
    • Macros definitions
    • GUIDs
    • Include dependency
    The whole data is stored in a C# model that is marshaled to XML using WCF (DataMember, DataContract), which makes the code really easy to write, not very intrusive, and lets you serialize and deserialize to XML. For example, a CppType is defined like this:

    using System.Runtime.Serialization;
    using System.Text;

    namespace SharpDX.Tools.XIDL
    {
    [DataContract]
    public class CppType : CppElement
    {
    [DataMember(Order=0)]
    public string Type { get; set;}
    [DataMember(Order=1)]
    public string Specifier { get; set; }
    [DataMember(Order=2)]
    public bool Const { get; set; }
    [DataMember(Order = 3)]
    public bool IsArray { get; set; }
    [DataMember(Order=4)]
    public string ArrayDimension { get; set; }
    }
    }

    The model is really lightweight, no fancy methods and easy to navigate in.

    The process is also responsible for getting documentation for each C++ item (enumerations, structures, interfaces, functions). The documentation is requested from MSDN while generating all the types. That was also a bit tricky to parse, but in the end, the class is very small (less than 200 lines of C# code). Downloaded documentation is stored on disk and is reused for later re-runs of the parsing.

    The generated XML model is around 1.7 MB for the DXGI, D3D11, D3DX11 and D3DCompiler includes, and looks like this:

          <Interfaces>
            <CppInterface>
              <Name>ID3D11DeviceChild</Name>
              <Description>A device-child interface accesses data used by a device.</Description>
              <Remarks i:nil="true" />
              <Parent>IUnknown</Parent>
              <Methods>
                <CppMethod>
                  <Name>GetDevice</Name>
                  <Description>Get a pointer to the device that created this interface.</Description>
                  <Remarks>Any returned interfaces will have their reference count incremented by one, so be sure to call ::release() on the returned pointer(s) before they are freed or else you will have a memory leak.</Remarks>
                  <ReturnType>
                    <Name i:nil="true" />
                    <Description>Returns nothing.</Description>
                    <Remarks i:nil="true" />
                    <Type>void</Type>
                    <Specifier></Specifier>
                    <Const>false</Const>
                    <IsArray>false</IsArray>
                    <ArrayDimension i:nil="true" />
                  </ReturnType>
                  <CallingConvention>StdCall</CallingConvention>
                  <Offset>3</Offset>
                  <Parameters>
                    <CppParameter>
                      <Name>ppDevice</Name>
                      <Description>Address of a pointer to a device (see {{ID3D11Device}}).</Description>
                      <Remarks i:nil="true" />
                      <Type>ID3D11Device</Type>
                      <Specifier>**</Specifier>
                      <Const>false</Const>
                      <IsArray>false</IsArray>
                      <ArrayDimension i:nil="true" />
                      <Attribute>Out</Attribute>
                    </CppParameter>
                  </Parameters>
                </CppMethod>

    One of the most important things in the DirectX headers that is required to develop a reliable code generator is the presence of C++ Windows-specific attributes: all the methods are prefixed by macros __out, __in, __out_opt, __out_buffer... etc. All those attributes are similar to C# attributes and explain how to interpret the parameters. If you take the previous code, there is a method GetDevice that returns an ID3D11Device through an [Out] parameter. The [Out] attribute is extremely important here, as we know exactly how to use it. Same thing when you have a pointer which is in fact a buffer: with the attributes, you know that there is an array of elements behind the pointer...

    However, I have discovered that some functions/methods are sometimes lacking attributes... but fortunately, the next process (the mapping from XIDL to C#) is able to add missing information like this.


    As I said, the current implementation is far from being failsafe and would probably require more testing on other headers files. At least, the process is correctly working on a subset of the DirectX headers.


    Generate C# from IDL


    This part of the process has been a lot more time consuming. I started with enums, which were quite straightforward to handle. Structures required a bit more work, as some of them cannot be marshaled easily and need custom marshaling... Then interface methods were the most difficult part; correctly handling all parameter cases was not easy...

    The process of generating the C# code is done in 3 steps:
    1. Reading the XIDL model and preparing it for mapping: removing types, adding information to some methods.
    2. Generating a C# model from the XIDL model and a set of mapping rules.
    3. Generating C# files from the C# model. I have used the T4 "Text Template Transformation Toolkit" engine as a text templatizer, which is part of VS2010, is really easy to use, and is integrated in VS2010 with a third-party syntax highlighting plugin.
    This step is also responsible for generating an interop assembly which directly emits some .NET IL bytecode through System.Reflection.Emit. This interop assembly is the trick that avoids the usage of a C++/CLI mixed assembly.

    Preamble) How to avoid the usage of C++/CLI in C#


    If you look at some generated C++/CLI code with Reflector, you will see that most of the code is in fact pure IL bytecode, even when there is a call to a native function or native method...

    The trick here is that there are a couple of IL instructions that are used internally by C# but not exposed to the language.

    1) The instruction "calli"

    This instruction is responsible for directly calling an unmanaged function, without going through the pinvoke/interop layer (in fact, pinvoke ends up calling "calli", but performs a much more complex marshaling of the parameters, structures...).

    What I needed was a way to call an unmanaged function/method without going through the pinvoke layer, and "calli" is exactly here for this. Now, suppose that we could generate a small assembly, at compile time and at runtime, responsible for hosting those calli functions: we would no longer have to use C++/CLI for this.

    For example, suppose that I want to call a C++ method of an interface which takes an integer as a parameter, something like :
    interface IDevice : IUnknown {
    void Draw(int count);
    }
    I only need a function in C# that is able to directly call this method, without going through the pinvoke layer, with a pointer to the C++ IDevice object and the offset of the method in the vtbl (the offset is expressed in bytes, for an x86 architecture here):
    class Interop {
    public static unsafe void CalliVoid(void* thisObject, int vtblOffset, int arg0);
    }

    // A call to IDevice
    void* ptrToIDevice = ...;

    // A Call to the method Draw, number 3 in the vtbl order (starting at 0 to 2 for IUnknown methods)
    Interop.CalliVoid(ptrToIDevice, /* 3 * sizeof(void* in x86) */ 3 * 4 , /* count */4 );


    The IL bytecode content of this method for an x64 architecture would typically look like this in C++/CLI:
    .method public hidebysig static void CalliVoid(void* arg0, int32 arg1, int32 arg2) cil managed
    {
    .maxstack 4
    L_0000: ldarg.0 // Load (0) this arg (1st parameter for native method)
    L_0001: ldarg.2 // Load (1) count arg
    L_0002: ldarg.1 // Offset in vtbl
    L_0003: conv.i // Convert to native int
    L_0004: dup //
    L_0005: add // Offset = offset * 2 (only for x64 architecture)
    L_0006: ldarg.0 //
    L_0007: ldind.i // Load vtbl pointer
    L_0008: add // pVtbl = pVtbl + offset
    L_0009: ldind.i // load the function pointer from the vtbl
    L_000a: calli method unmanaged stdcall void *(void*, int32)
    L_000f: ret
    }

This kind of code will be automatically inlined by the JIT (which, judging from the SSCLI/Rotor source code, inlines functions smaller than 25 bytes of bytecode).

    If you look at a C++/CLI assembly, you will see lots of "calli" instructions.

So in the end, how is this trick used? Because the generator knows all the methods of all the interfaces, it is able to generate the set of all calling signatures needed to call into unmanaged objects. In fact, the XIDLToCSharp generator produces an assembly containing all the interop methods (around 66 Calli-based methods):
    public class Interop
    {
    private Interop();
    public static unsafe float CalliFloat(void* arg0, int arg1, void* arg2);
    public static unsafe int CalliInt(void* arg0, int arg1);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2);
    public static unsafe int CalliInt(void* arg0, int arg1, long arg2);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, int arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, long arg2, int arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, int arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, void* arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, void* arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, IntPtr arg2, void* arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, IntPtr arg2, int arg3);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, void* arg3, int arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, void* arg3, void* arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, int arg3, void* arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, int arg3, void* arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, void* arg3, void* arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, IntPtr arg2, void* arg3, void* arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, void* arg3, int arg4);
    public static unsafe int CalliInt(void* arg0, int arg1, int arg2, int arg3, void* arg4, void* arg5);
    public static unsafe int CalliInt(void* arg0, int arg1, void* arg2, void* arg3, int arg4, int arg5);
    //
    // ...[stripping Calli x methods here]...
    //
    public static unsafe void CalliVoid(void* arg0, int arg1, int arg2, void* arg3, void* arg4, int arg5, int arg6, void* arg7);
    public static unsafe void CalliVoid(void* arg0, int arg1, void* arg2, float arg3, float arg4, float arg5, float arg6, void* arg7);
    public static unsafe void CalliVoid(void* arg0, int arg1, int arg2, void* arg3, void* arg4, int arg5, int arg6, void* arg7, void* arg8);
    public static unsafe void CalliVoid(void* arg0, int arg1, void* arg2, int arg3, int arg4, int arg5, int arg6, void* arg7, int arg8, void* arg9);
    public static unsafe void* Read<T>(void* pSrc, ref T data) where T: struct;
    public static unsafe void* Read<T>(void* pSrc, T[] data, int offset, int count) where T: struct;
    public static unsafe void* Write<T>(void* pDest, ref T data) where T: struct;
    public static unsafe void* Write<T>(void* pDest, T[] data, int offset, int count) where T: struct;
    public static void memcpy(void* pDest, void* pSrc, int Count);
    }

This assembly is used at compile time but is not distributed at runtime. Instead, it is dynamically regenerated at runtime in order to handle the bytecode differences between x86 and x64 (in the calli example, the offset into the vtbl must be multiplied by 2, because a pointer is 8 bytes on x64).
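To give an idea of what this runtime generation involves, here is a minimal sketch of emitting one such Calli-based method with System.Reflection.Emit (this is my illustration, not the actual SharpDX generator; the CalliEmitter class and BuildCalliVoid method are hypothetical names, and the TypeBuilder is assumed to come from a dynamically created assembly/module):

using System;
using System.Reflection;
using System.Reflection.Emit;
using System.Runtime.InteropServices;

static class CalliEmitter
{
    // Builds: public static void CalliVoid(void* thisObject, int vtblOffset, int arg0)
    public static void BuildCalliVoid(TypeBuilder type)
    {
        MethodBuilder method = type.DefineMethod(
            "CalliVoid",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(void),
            new[] { typeof(void*), typeof(int), typeof(int) });

        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);          // this pointer (1st native argument)
        il.Emit(OpCodes.Ldarg_2);          // arg0 (2nd native argument)
        il.Emit(OpCodes.Ldarg_1);          // vtbl offset, expressed in x86 slots * 4
        il.Emit(OpCodes.Conv_I);
        if (IntPtr.Size == 8)
        {
            il.Emit(OpCodes.Dup);          // offset *= 2 on x64, since pointers are 8 bytes
            il.Emit(OpCodes.Add);
        }
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldind_I);          // load the vtbl pointer
        il.Emit(OpCodes.Add);              // pVtbl + offset
        il.Emit(OpCodes.Ldind_I);          // load the function pointer from the vtbl
        il.EmitCalli(OpCodes.Calli, CallingConvention.StdCall,
                     typeof(void), new[] { typeof(void*), typeof(int) });
        il.Emit(OpCodes.Ret);
    }
}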

2) The instruction "sizeof" for generics

Although calli is the real trick that makes it possible to call unmanaged methods from managed code without using pinvoke, I found a couple of other IL instructions that are necessary to get the same features as in C++/CLI.

The other one is sizeof for generics. C# does have a sizeof operator, but while trying to replicate the DataStream class from SlimDX in pure C#, I was not able to write this kind of code:
    public class DataStream
    {
    // Unmarshal a struct from a memory location
    public T Read<T>() where T: struct {
    T myStruct = default(T);
memcpy(&myStruct, &m_buffer, sizeof(T));
    return myStruct;
    }
    }

In fact, in C#, sizeof does not work on a generic type parameter, even if you constrain it to be a struct. Because C# cannot constrain the struct to contain only blittable fields (I mean, it could, but it doesn't try to), it doesn't allow taking the size of a generic struct... That was annoying, but since it works fine with pure IL instructions and I was already generating the Interop assembly, I was free to add whatever methods with custom bytecode were needed to fill the gap...

    In the end, the interop code to read a generic struct from a memory location looks like this :
    // This method is reading a T struct from pSrc and returning the address : pSrc + sizeof(T)
    .method public hidebysig static void* Read<valuetype .ctor T>(void* pSrc, !!T& data) cil managed
    {
    .maxstack 3
    .locals init (
    [0] int32 num,
    [1] !!T* pinned localPtr)
    L_0000: ldarg.1
    L_0001: stloc.1
    L_0002: ldloc.1
    L_0003: ldarg.0
    L_0004: sizeof !!T
    L_000a: conv.i4
    L_000b: stloc.0
    L_000c: ldloc.0
    L_000d: unaligned 1 // Mandatory for x64 architecture
    L_0010: nop
    L_0011: nop
    L_0012: nop
    L_0013: cpblk // Memcpy
    L_0015: ldloc.0
    L_0016: conv.i
    L_0017: ldarg.0
    L_0018: add
    L_0019: ret
    }

    3) The instruction "cpblk", memcpy in IL

In the previous function, you can see the use of the "cpblk" bytecode instruction. In fact, when you look at a C++/CLI method using memcpy, it does not call the memcpy from the C CRT but uses the IL instruction that performs the same task directly. This IL instruction is faster than any kind of interop, so I made it available to C# through the Interop assembly.
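As an illustration of how these helpers end up being used, here is a hedged sketch of a DataStream-like Read method on top of the generated Interop.Read shown in the listing above (the m_buffer field and the surrounding class are simplified for the example):

public unsafe class DataStream
{
    private byte* m_buffer;   // current read position inside the native buffer

    // Unmarshal a struct from the current position and advance the pointer,
    // since Interop.Read returns pSrc + sizeof(T)
    public T Read<T>() where T : struct
    {
        T value = default(T);
        m_buffer = (byte*)Interop.Read(m_buffer, ref value);
        return value;
    }
}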

    I) Prepare XIDL model for mapping


So the 1st step in the XIDLToCSharp process is to prepare the XIDL model to be more mapping friendly. This step is essentially responsible for:
• Adding missing C++ attribute information (In, InOut, Buffer) to some method parameters
• Replacing the type of some method parameters: for example in DirectX, lots of parameters take flags that are in fact an already declared enum... but for some unknown reason, the method is declared with an "int" instead of using the enum...
• Removing some types. For example, D3D_PRIMITIVE_TOPOLOGY holds a bunch of D3D11 and D3D10 enum items, duplicating the D3D_PRIMITIVE enums... so I'm removing them.
• Adding some tags directly on the XIDL model in order to ease the next mapping step: those tags are for example used to set the C# visibility of a method, or to force a method not to be interpreted as a "property"
    // Read the XIDL model
    CppIncludeGroup group = CppIncludeGroup.Read("directx_idl.xml");

    group.Modify<CppParameter>("^D3DX11.*?::pDefines", Modifiers.ParameterAttribute(CppAttribute.In | CppAttribute.Buffer | CppAttribute.Optional));

    // Modify device Flags for D3D11CreateDevice to use D3D11_CREATE_DEVICE_FLAG
    group.Modify<CppParameter>("^D3D11CreateDevice.*?::Flags$", Modifiers.Type("D3D11_CREATE_DEVICE_FLAG"));

    // ppFactory on CreateDXGIFactory.* should be Attribute.Out
    group.Modify<CppParameter>("^CreateDXGIFactory.*?::ppFactory$", Modifiers.ParameterAttribute(CppAttribute.Out));

    // pDefines is an array of Macro (and not just In)
    group.Modify<CppParameter>("^D3DCompile::pDefines", Modifiers.ParameterAttribute(CppAttribute.In | CppAttribute.Buffer | CppAttribute.Optional));
    group.Modify<CppParameter>("^D3DPreprocess::pDefines", Modifiers.ParameterAttribute(CppAttribute.In | CppAttribute.Buffer | CppAttribute.Optional));

    // SwapChain description is mandatory In and not optional
    group.Modify<CppParameter>("^D3D11CreateDeviceAndSwapChain::pSwapChainDesc", Modifiers.ParameterAttribute(CppAttribute.In));

    // Remove all enums ending with _FORCE_DWORD, FORCE_UINT
    group.Modify<CppEnumItem>("^.*_FORCE_DWORD$", Modifiers.Remove);
    group.Modify<CppEnumItem>("^.*_FORCE_UINT$", Modifiers.Remove);

You can see that the pre-mapping (and the mapping) makes intensive use of regular expressions to match names, which is a very convenient way to perform a kind of XPath-like query over the model with regexes.
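As a small illustration of this convention, the rules above are matched against fully qualified names such as "FunctionName::ParameterName"; a plain .NET Regex is enough to test a rule (a standalone hypothetical example, not part of the generator):

using System;
using System.Text.RegularExpressions;

class RegexPathExample
{
    static void Main()
    {
        // Same pattern as the D3D11CreateDevice rule above
        var rule = new Regex(@"^D3D11CreateDevice.*?::Flags$");
        Console.WriteLine(rule.IsMatch("D3D11CreateDevice::Flags"));              // True
        Console.WriteLine(rule.IsMatch("D3D11CreateDeviceAndSwapChain::Flags"));  // True
        Console.WriteLine(rule.IsMatch("D3D11CreateDevice::pAdapter"));           // False
    }
}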

    II) Generate C# model from XIDL and mapping rules


This step takes the pre-processed XIDL and generates a C# model (a subset of a C# model in memory), adding mapping information and preparing things so that the model is easier to use from the T4 template engine.

    In order to generate the C# model from DirectX, the generator needs a couple of mapping rules.

    1) Mapping an include to an assembly / namespace

This rule defines the default dispatching of types to an assembly / namespace. It associates a source header include (the name of the .h file, without the extension) with a target assembly and namespace.
    // Namespace mapping 

    // Map dxgi include to assembly SharpDX.DXGI, namespace SharpDX.DXGI
    gen.MapIncludeToNamespace("dxgi", "SharpDX.DXGI");
    gen.MapIncludeToNamespace("dxgiformat", "SharpDX.DXGI");
    gen.MapIncludeToNamespace("dxgitype", "SharpDX.DXGI");

    // Map D3DCommon include to assembly SharpDX, namespace SharpDX.Direct3D
    gen.MapIncludeToNamespace("d3dcommon", "SharpDX.Direct3D", "SharpDX");

    gen.MapIncludeToNamespace("d3d11", "SharpDX.Direct3D11");
    gen.MapIncludeToNamespace("d3dx11", "SharpDX.Direct3D11");
    gen.MapIncludeToNamespace("d3dx11core", "SharpDX.Direct3D11");
    gen.MapIncludeToNamespace("d3dx11tex", "SharpDX.Direct3D11");
    gen.MapIncludeToNamespace("d3dx11async", "SharpDX.Direct3D11");
    gen.MapIncludeToNamespace("d3d11shader", "SharpDX.D3DCompiler");
    gen.MapIncludeToNamespace("d3dcompiler", "SharpDX.D3DCompiler");

    2) Mapping a particular type to an assembly / namespace

It is also necessary to override the default include-to-assembly/namespace dispatching for some particular types. This is what the following rule does.
    gen.MapTypeToNamespace("^D3D_PRIMITIVE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_CBUFFER_TYPE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_RESOURCE_RETURN_TYPE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_CBUFFER_FLAGS$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_INPUT_TYPE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_VARIABLE_CLASS$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_VARIABLE_FLAG$S", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_VARIABLE_TYPE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_TESSELLATOR_DOMAIN$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_TESSELLATOR_PARTITIONING$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_TESSELLATOR_OUTPUT_PRIMITIVE$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_SHADER_INPUT_FLAGS$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_NAME$", "SharpDX.D3DCompiler");
    gen.MapTypeToNamespace("^D3D_REGISTER_COMPONENT_TYPE$", "SharpDX.D3DCompiler");

The previous code instructs the generator to move some D3D types to the SharpDX.D3DCompiler namespace (and assembly). Those types are in fact more related to shader reflection and are associated with the D3DCompiler assembly (I made the same design choice as SlimDX, although another mapping could be considered).

    3) Mapping a C++ type to a custom C# type

It is sometimes necessary to map a C++ type to a non-generated C# type. For example, the C++ "RECT" structure is not strictly equivalent to System.Drawing.Rectangle (RECT uses Left, Top, Right, Bottom fields instead of Left, Top, Width, Height for System.Drawing.Rectangle). This rule defines such a custom mapping. SharpDX.Rectangle is not produced by the generator but is defined by hand in the SharpDX assembly project (see the last part).
    var rectType = new CSharpStruct();
    rectType.Name = "SharpDX.Rectangle";
    rectType.SizeOf = 4*4;
    gen.MapCppTypeToCSharpType("RECT", rectType); //"SharpDX.Rectangle", 4 * 4, false, true);

    4) Mapping a C++ name to a C# name
The renaming rules are quite rich. XIDLToCSharp provides a default renaming mechanism that respects the CamelCase convention, but there are some exceptions that need to be addressed. For example:
    // Rename DXGI_MODE_ROTATION to DisplayModeRotation
    gen.RenameType(@"^DXGI_MODE_ROTATION$","DisplayModeRotation");
    gen.RenameType(@"^DXGI_MODE_SCALING$", "DisplayModeScaling");
    gen.RenameType(@"^DXGI_MODE_SCANLINE_ORDER$", "DisplayModeScanlineOrder");

    // Use regular expression to take the part of some names...
    gen.RenameType(@"^D3D_SVC_(.*)", "$1");
    gen.RenameType(@"^D3D_SVF_(.*)", "$1");
    gen.RenameType(@"^D3D_SVT_(.*)", "$1");
    gen.RenameType(@"^D3D_SIF_(.*)", "$1");
    gen.RenameType(@"^D3D_SIT_(.*)", "$1");
    gen.RenameType(@"^D3D_CT_(.*)", "$1");

For structures and enums that use the "_" underscore to separate name subparts, you can let XIDLToCSharp rename each subpart correctly, while still being able to specify how a particular subpart should be renamed:
    // Expand sub part between underscore
    gen.RenameTypePart("^DESC$", "Description");
    gen.RenameTypePart("^CBUFFER$", "ConstantBuffer");
    gen.RenameTypePart("^TBUFFER$", "TextureBuffer");
    gen.RenameTypePart("^BUFFEREX$", "ExtendedBuffer");
    gen.RenameTypePart("^FUNC$", "Function");
    gen.RenameTypePart("^FLAG$", "Flags");
    gen.RenameTypePart("^SRV$", "ShaderResourceView");
    gen.RenameTypePart("^DSV$", "DepthStencilView");
    gen.RenameTypePart("^RTV$", "RenderTargetView");
    gen.RenameTypePart("^UAV$", "UnorderedAccessView");
    gen.RenameTypePart("^TEXTURE1D$", "Texture1D");
    gen.RenameTypePart("^TEXTURE2D$", "Texture2D");
    gen.RenameTypePart("^TEXTURE3D$", "Texture3D");

With these rules, for a struct named "BLABLA_DESC" for example, the DESC part will be expanded to "Description", resulting in the C# name "BlablaDescription".
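A minimal sketch of this renaming logic could look like the following (my own illustration, not the actual XIDLToCSharp implementation, which matches subparts with regexes):

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

class TypeRenamer
{
    private readonly Dictionary<string, string> partExpansions = new Dictionary<string, string>();

    // Register an expansion for a subpart, e.g. "DESC" -> "Description"
    public void RenameTypePart(string part, string replacement)
    {
        partExpansions[part] = replacement;
    }

    // "BLABLA_DESC" -> "BlablaDescription"
    public string Rename(string cppName)
    {
        return string.Concat(cppName.Split('_').Select(part =>
        {
            string expanded;
            if (partExpansions.TryGetValue(part, out expanded))
                return expanded;
            // Default rule: CamelCase the subpart ("BLABLA" -> "Blabla")
            return CultureInfo.InvariantCulture.TextInfo.ToTitleCase(part.ToLowerInvariant());
        }));
    }
}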

    5) Change Field type mapping in C#

Again, there are lots of enums in DirectX that are not used in the structures. For example, if you take D3D11_BUFFER_DESC, all the flag fields are declared as plain integers instead of using their respective enums.

This mapping rule changes the destination type of a field:
    gen.ChangeStructFieldTypeToNative("D3D11_BUFFER_DESC", "BindFlags", "D3D11_BIND_FLAG");
    gen.ChangeStructFieldTypeToNative("D3D11_BUFFER_DESC", "CPUAccessFlags", "D3D11_CPU_ACCESS_FLAG");
    gen.ChangeStructFieldTypeToNative("D3D11_BUFFER_DESC", "MiscFlags", "D3D11_RESOURCE_MISC_FLAG");

    6) Generate enums from C++ macros, improving enums

Again, the DirectX SDK is not consistent with enums. Some flags are in fact defined as macro definitions, which makes the IntelliSense experience non-existent...

XIDLToCSharp is able to create an enum from a set of macro definitions:
    // Create enums from macro definitions
    // Create the D3DCOMPILE_SHADER_FLAGS C++ type from the D3DCOMPILE_.* macros
    gen.CreateEnumFromMacros(@"^D3DCOMPILE_[^E][^F].*", "D3DCOMPILE_SHADER_FLAGS");
    gen.CreateEnumFromMacros(@"^D3DCOMPILE_EFFECT_.*", "D3DCOMPILE_EFFECT_FLAGS");
    gen.CreateEnumFromMacros(@"^D3D_DISASM_.*", "D3DCOMPILE_DISASM_FLAGS");

There are also some tiny adjustments to make to existing enums, like adding a "None = 0" item for some flags.

    7) Move interface methods to inner interfaces in C#

If you have been using Direct3D 11, you will have noticed that all the methods of each pipeline stage are prefixed with the stage abbreviation, making the ID3D11DeviceContext interface, for example, quite ugly to use, ending up with code like this:
    deviceContext.IASetInputLayout(inputlayout); 

SlimDX did something really nice: for each pipeline stage (IA for InputAssembler, VS for VertexShader...) they created a property accessor to an interface that exposes the methods of that stage, resulting in improved readability and a much better IntelliSense experience.
    deviceContext.InputAssembler.InputLayout = inputlayout; 

In XIDLToCSharp, there is a rule to handle such a case, and it is as simple as writing this:
// Map all IA* methods to the inner interface InputAssemblerStage with the accessor property InputAssembler, using the method name $1 (extracted from the regexp)
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::IA(.*)", "InputAssemblerStage", "InputAssembler", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::VS(.*)", "VertexShaderStage", "VertexShader", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::PS(.*)", "PixelShaderStage", "PixelShader", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::GS(.*)", "GeometryShaderStage", "GeometryShader", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::SO(.*)", "StreamOutputStage", "StreamOutput", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::DS(.*)", "DomainShaderStage", "DomainShader", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::HS(.*)", "HullShaderStage", "HullShader", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::RS(.*)", "RasterizerStage", "Rasterizer", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::OM(.*)", "OutputMergerStage", "OutputMerger", "$1");
    gen.MoveMethodsToInnerInterface("ID3D11DeviceContext::CS(.*)", "ComputeShaderStage", "ComputeShader", "$1");

    8) Dispatch method to function group

DirectX C++ functions are mapped to a set of function groups, each with an associated DLL. For example, it is possible to specify that all D3D11.* functions will map to a class D3D11 containing all the associated methods.
    // Function group
    var d3dCommonFunctionGroup = gen.CreateFunctionGroup("SharpDX", "SharpDX.Direct3D", "D3DCommon");
    var dxgiFunctionGroup = gen.CreateFunctionGroup("SharpDX.DXGI", "SharpDX.DXGI", "DXGI");
    var d3dFunctionGroup = gen.CreateFunctionGroup("SharpDX.D3DCompiler", "SharpDX.D3DCompiler", "D3D");
    var d3d11FunctionGroup = gen.CreateFunctionGroup("SharpDX.Direct3D11", "SharpDX.Direct3D11", "D3D11");
    var d3dx11FunctionGroup = gen.CreateFunctionGroup("SharpDX.Direct3D11", "SharpDX.Direct3D11", "D3DX11");

    // Map All D3D11 functions to D3D11 Function Group
    gen.MapFunctionToFunctionGroup(@"^D3D11.*", "d3d11.dll", d3d11FunctionGroup);

    // Map All D3DX11 functions to D3DX11 Function Group
    gen.MapFunctionToFunctionGroup(@"^D3DX11.*", group.Find<cppmacrodefinition>("D3DX11_DLL_A").FirstOrDefault().StripStringValue, d3dx11FunctionGroup);

// Map the D3DCreateBlob function to the D3DCommon Function Group, using the D3DCompiler DLL
string d3dCompilerDll =
group.Find<CppMacroDefinition>("D3DCOMPILER_DLL_A").FirstOrDefault().StripStringValue;
    gen.MapFunctionToFunctionGroup(@"^D3DCreateBlob$", d3dCompilerDll, d3dCommonFunctionGroup);

If a DLL has a versioned name (like D3DX11_xx.dll or D3DCompiler_xx.dll), we retrieve the DLL name directly from a macro!


III) Generate C# code from the C# model and add custom classes


Once the internal C# model is built, we call the T4 text template toolkit engine for each group of types: Enumerations, Structures, Interfaces, Functions. Those classes are then integrated into several VS projects, together with some custom code and some non-generated core classes.

    The generated C# interop code


This means that for each assembly and each namespace, Enumerations.cs, Structures.cs, Interfaces.cs and Functions.cs files are generated.

For each kind of type, a custom mapping is done:
• For enums, the mapping is straightforward, resulting in an almost one-to-one mapping
• For structures, the mapping is also quite straightforward, resulting in an almost one-to-one mapping for most types. There are, however, a couple of cases where the mapping needs to generate some marshalling code, essentially when there is a bool in the struct, a string pointer, or a fixed array of structs inside a struct.
For example, one of the most complex structure mappings is generated like this:

    /// <summary> 
    /// Describes the blend state.
    /// </summary>
    /// <remarks>
/// These are the default values for blend state.StateDefault ValueAlphaToCoverageEnableFALSEIndependentBlendEnableFALSERenderTarget[0].BlendEnableFALSERenderTarget[0].SrcBlendD3D11_BLEND_ONERenderTarget[0].DestBlendD3D11_BLEND_ZERORenderTarget[0].BlendOpD3D11_BLEND_OP_ADDRenderTarget[0].SrcBlendAlphaD3D11_BLEND_ONERenderTarget[0].DestBlendAlphaD3D11_BLEND_ZERORenderTarget[0].BlendOpAlphaD3D11_BLEND_OP_ADDRenderTarget[0].RenderTargetWriteMaskD3D11_COLOR_WRITE_ENABLE_ALL Note that D3D11_BLEND_DESC is identical to {{D3D10_BLEND_DESC1}}.If the driver type is set to <see cref="SharpDX.Direct3D.DriverType.Hardware"/>, the feature level is set to less than or equal to <see cref="SharpDX.Direct3D.FeatureLevel.Level_9_3"/>, and the pixel format of the render target is set to <see cref="SharpDX.DXGI.Format.R8G8B8A8_UNorm_SRgb"/>, DXGI_FORMAT_B8G8R8A8_UNORM_SRGB, or DXGI_FORMAT_B8G8R8X8_UNORM_SRGB, the display device performs the blend in standard RGB (sRGB) space and not in linear space. However, if the feature level is set to greater than D3D_FEATURE_LEVEL_9_3, the display device performs the blend in linear space.
    /// </remarks>
    /// <unmanaged>D3D11_BLEND_DESC</unmanaged>
    public partial struct BlendDescription {

    /// <summary>
    /// Determines whether or not to use alpha-to-coverage as a multisampling technique when setting a pixel to a rendertarget.
    /// </summary>
    /// <unmanaged>BOOL AlphaToCoverageEnable</unmanaged>
    public bool AlphaToCoverageEnable {
    get {
    return (_AlphaToCoverageEnable!=0)?true:false;
    }
    set {
    _AlphaToCoverageEnable = value?1:0;
    }
    }
    internal int _AlphaToCoverageEnable;

    /// <summary>
    /// Set to TRUE to enable independent blending in simultaneous render targets. If set to FALSE, only the RenderTarget[0] members are used. RenderTarget[1..7] are ignored.
    /// </summary>
    /// <unmanaged>BOOL IndependentBlendEnable</unmanaged>
    public bool IndependentBlendEnable {
    get {
    return (_IndependentBlendEnable!=0)?true:false;
    }
    set {
    _IndependentBlendEnable = value?1:0;
    }
    }
    internal int _IndependentBlendEnable;

    /// <summary>
    /// An array of render-target-blend descriptions (see <see cref="SharpDX.Direct3D11.RenderTargetBlendDescription"/>); these correspond to the eight rendertargets that can be set to the output-merger stage at one time.
    /// </summary>
    /// <unmanaged>D3D11_RENDER_TARGET_BLEND_DESC RenderTarget[8]</unmanaged>
    public SharpDX.Direct3D11.RenderTargetBlendDescription[] RenderTarget {
    get {
    if (_RenderTarget == null) {
    _RenderTarget = new SharpDX.Direct3D11.RenderTargetBlendDescription[8];
    }
    return _RenderTarget;
    }
    }
    internal SharpDX.Direct3D11.RenderTargetBlendDescription[] _RenderTarget;

    // Internal native struct used for marshalling
    [StructLayout(LayoutKind.Sequential, Pack = 0 )]
    internal unsafe partial struct __Native {
    public int _AlphaToCoverageEnable;
    public int _IndependentBlendEnable;
    public SharpDX.Direct3D11.RenderTargetBlendDescription RenderTarget;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget1;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget2;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget3;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget4;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget5;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget6;
    SharpDX.Direct3D11.RenderTargetBlendDescription __RenderTarget7;
    // Method to free native struct
    internal unsafe void __MarshalFree()
    {
    }
    }

    // Method to marshal from native to managed struct
    internal unsafe void __MarshalFrom(ref __Native @ref)
    {
    this._AlphaToCoverageEnable = @ref._AlphaToCoverageEnable;
    this._IndependentBlendEnable = @ref._IndependentBlendEnable;
    fixed (void* __to = &this.RenderTarget[0]) fixed (void* __from = &@ref.RenderTarget) SharpDX.Utilities.CopyMemory((IntPtr) __to, (IntPtr) __from, 8*sizeof ( SharpDX.Direct3D11.RenderTargetBlendDescription));
    }
// Method to marshal from managed struct to native
    internal unsafe void __MarshalTo(ref __Native @ref)
    {
    @ref._AlphaToCoverageEnable = this._AlphaToCoverageEnable;
    @ref._IndependentBlendEnable = this._IndependentBlendEnable;
    fixed (void* __to = &@ref.RenderTarget) fixed (void* __from = &this.RenderTarget[0]) SharpDX.Utilities.CopyMemory((IntPtr) __to, (IntPtr) __from, 8*sizeof ( SharpDX.Direct3D11.RenderTargetBlendDescription));

    }
    }

• For interfaces, the mapping is quite complex, because it is necessary to handle lots of different cases:
  • Optional structures in input
  • Optional parameters
  • Outputting an array of interfaces
  • Performing some custom marshalling (for example, with the previous BlendDescription structure)
  • Generating properties for methods that are property-eligible
  • ...etc.
For example, the method using the BlendDescription looks like this:
    /// <summary> 
/// Create a blend-state object that encapsulates blend state for the output-merger stage.
    /// </summary>
    /// <remarks>
    /// An application can create up to 4096 unique blend-state objects. For each object created, the runtime checks to see if a previous object has the same state. If such a previous object exists, the runtime will return a pointer to previous instance instead of creating a duplicate object.
    /// </remarks>
    /// <param name="blendStateDescRef">Pointer to a blend-state description (see <see cref="SharpDX.Direct3D11.BlendDescription"/>).</param>
    /// <param name="blendStateRef">Address of a pointer to the blend-state object created (see <see cref="SharpDX.Direct3D11.BlendState"/>).</param>
    /// <returns>This method returns E_OUTOFMEMORY if there is insufficient memory to create the blend-state object. See {{Direct3D 11 Return Codes}} for other possible return values.</returns>
    /// <unmanaged>HRESULT CreateBlendState([In] const D3D11_BLEND_DESC* pBlendStateDesc,[Out, Optional] ID3D11BlendState** ppBlendState)</unmanaged>
    public SharpDX.Result CreateBlendState(ref SharpDX.Direct3D11.BlendDescription blendStateDescRef, out SharpDX.Direct3D11.BlendState blendStateRef){
    unsafe {
    SharpDX.Direct3D11.BlendDescription.__Native blendStateDescRef_ = new SharpDX.Direct3D11.BlendDescription.__Native();
    blendStateDescRef.__MarshalTo(ref blendStateDescRef_);
    IntPtr blendStateRef_ = IntPtr.Zero;
    SharpDX.Result __result__;
    __result__= (SharpDX.Result)SharpDX.Interop.CalliInt(_nativePointer, 20 * 4, &blendStateDescRef_, &blendStateRef_); 
              Making of Ergon 4K PC Intro        
You are not going to discover any fantastic trick here, and the intro itself is not an outstanding coding performance, but I always enjoy reading the making-of of other intros, so it's time to put this one on paper!

What is Ergon? It's a small 4k intro (meaning a 4096-byte executable) that was released at the 2010 Breakpoint demoparty (if you can't run it on your hardware, you can still watch it on youtube), and which, surprisingly, managed to finish in 3rd place! I did the coding and design, and also worked on the music with my friend ulrick.

It was a great experience, even if I didn't expect to work on this production at the beginning of the year... but at the end of January, when BP2010 was announced and supposed to be the last one, I was motivated to go there and, why not, release a 4k intro! One month and a half later, the demo was almost ready... wow, 3 weeks before the party, the first time I finished something so far ahead of an event! But yep, I was able to work on it part time during the week (and at night, of course)... When I started, I had no idea where this project would take me... or even which 3D API I should use to start this intro!

    OpenGL, DirectX 9, 10 or 11?


At FRequency, xt95 mainly works in OpenGL, mostly due to the fact that he is a Linux user. All our previous intros were done using OpenGL, although I did help on some of them and bought OpenGL books a few years ago... I'm not a huge fan of the OpenGL C API, but most importantly, from my short experience with it, I was always able to strip down DirectX code size better than OpenGL code... At that time, I was also working a bit more with the DirectX API... I had even bought an ATI 5770 earlier to be able to play with the D3D11 compute shader API... I'm also mostly a Windows user... DirectX has very well integrated documentation in Visual Studio, a good SDK with lots of samples, a cleaner API (more true of the recent D3D10/D3D11), some cool tools like PIX to debug shaders... and I also thought that programming with DirectX on Windows might reduce the risk of incompatibilities between NVidia and ATI graphics cards (although I found that, at least with D3D9, this is not always true...).

So ok, DirectX was selected... but which version? I started my first implementation with D3D10. I know that the code is much more verbose than D3D9 or OpenGL 2.0, but I wanted to practice this somewhat "new" API a bit more than just reading a book about it. I was also interested in putting some text in the demo and tried an integration with the latest Direct2D/DirectWrite API.

Everything went well at the beginning with the D3D10 API. The code was clean, thanks to the thin layer I developed around DirectX to make the coding experience much closer to what I was used to in C# with SlimDX, for example. The resulting C++ code was something like this:
    //
    // Set VertexBuffer for InputAssembler Stage
    device.InputAssembler.SetVertexBuffers(screen.vertexBuffer, sizeof(VertexDataOffline));

    // Set TriangleList PrimitiveTopology for InputAssembler Stage
    device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology::TriangleStrip);

    // Set VertexShader for the current Pass
    device.VertexShader.Set(effect.vertexShader);
Very pleasant to develop with, but because I wanted to test D2D1, I switched to D3D10.1, which can be configured to run on D3D10 hardware (with the feature level thing)... So I also started to lightly wrap the Direct2D API and was able to produce some really nice text very easily... but wow... the code was a bit too large for a 4k (though it would be perfect for a 64k).

Then, during this experimentation phase, I tried the D3D11 API with the compute shader thing... and found that the code is much more compact than D3D10 if you are doing something like, for example, raymarching... I didn't compare code sizes, but I suspect it can compete with its D3D9 counterpart (although there is a downside in D3D11: if you have real D3D11 hardware, a compute shader can render directly to the screen buffer... otherwise, using the D3D11 compute shader with feature level 10, you have to copy the result from one resource to another... which might eat into the size benefit...).

I was happy to see that the switch to D3D11 was easy, with some continuity from D3D10 in the API "look & feel"... Although I was disappointed to learn that working with both D3D11 and D2D1 was not straightforward, because D2D1 is only compatible with the D3D10.1 API (which you can run with feature levels 9.0 to 10), forcing you to initialize and maintain two devices (one for D3D10.1 and one for D3D11) and to play with DXGI shared resources between the devices... wow, lots of work, lots of code... and of course, out of the question for a 4k...

So I tried... good old plain D3D9... and that was of course much more compact in size than its D3D10 counterpart... So for around two weeks in February, I played with those various APIs while implementing some basic scenes for the intro. I just had a bad surprise when releasing the intro, because lots of people were not able to run it: weird, because I had been able to test it on several NVidia cards and at least my ATI 5770... I didn't expect D3D9 to be so sensitive to this, or at least I expected it to be a bit less sensitive than OpenGL... but I was wrong.

    Raymarching optimization


I decided to go for an intro using the raymarching algorithm, which was more likely to deliver "fat" content in a tiny amount of code. Raymarching was already a bit on its way out, though, after the fantastic intros released earlier in 2009 (Elevated - not really a raymarching intro but so impressive!, Sult, Rudebox, Muon-Baryon... etc.). But I didn't have enough time to explore a new effect and was not even confident I would find anything interesting in that time... so... ok, raymarching.

So for one week, after building a 1st scene, I spent my time trying to optimize the raymarching algorithm. There was an instructive thread on pouet about this: "So, what do distance field equations look like? And how do we solve them?". I tried to implement some tricks like...
1. Generate a grid in the vertex shader (with 4x4 pixel cells, for example) to precompute a coarse view of the scene, storing the minimal distance to travel before hitting a surface... then let the pixel shader fetch those interpolated distances (multiplied by a small reduction factor like .9f) and perform fine-grained raymarching with fewer iterations
2. Generate a pre-rendered 3D volume of the scene at a much lower density (like 96x96x96) and use this map to navigate the distance field, while still performing some "sphere tracing" refinement when needed
3. I also tried some kind of level of detail on the scene: for example, instead of doing a texture lookup (for the "bump mapping") at each step of the raymarching, let the raymarcher use a simplified analytical version of the scene and switch to the more detailed one for the last steps
Well, I have to admit that none of those techniques were really clever in any way... and the results matched that lack of cleverness! None of them provided a significant speed-up compared to the code size hit they generated.

So after one week of optimization, well, I just went with a basic raymarching algorithm. The shader was developed under Visual C++, integrated in the project (thanks to NShader syntax highlighting). I wrote a small C# tool to strip the shader comments and remove unnecessary spaces, integrated into the build (pre-build events in VC++); it's really enjoyable to work with this toolchain.
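For reference, a comment/whitespace stripper of that kind can be as simple as the following hedged sketch (my own illustration, not the actual tool; it keeps line breaks so that #define directives stay valid):

using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class ShaderStrip
{
    static void Main(string[] args)
    {
        string source = File.ReadAllText(args[0]);
        // Remove /* ... */ block comments and // line comments
        source = Regex.Replace(source, @"/\*.*?\*/", "", RegexOptions.Singleline);
        source = Regex.Replace(source, @"//[^\r\n]*", "");
        // Collapse horizontal whitespace, trim lines and drop empty ones
        var lines = source.Split('\n')
            .Select(line => Regex.Replace(line, @"[ \t]+", " ").Trim())
            .Where(line => line.Length > 0);
        File.WriteAllText(args[1], string.Join("\n", lines));
    }
}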

    Scenes design


For the scenes, I decided to use the same kind of technique as in the Rudebox 4k intro: relying more on the geometry and the lights than on the materials. That made the success of Rudebox, and I was motivated to build some complex CSG with boolean operations on basic elements (box, sphere, etc.). The nice thing about this approach is that it avoids putting any kind of if/then/else inside the iso-surface function to determine the material... properly placed lights in the scene can do the work. Indeed, Rudebox is essentially a scene with, say, a white material on all objects. What makes the difference is the position of the lights in the scene, their intensity, etc. Ergon used the same trick.
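To make the CSG idea concrete, here is a hedged sketch of boolean operations on distance fields (union = min, intersection = max, subtraction = max(a, -b)); this is illustration only, written in C# for readability, and not the intro's actual HLSL scene code:

using System;

static class DistanceCsg
{
    struct Vec3
    {
        public float X, Y, Z;
        public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    }

    static float Length(Vec3 v)
    {
        return (float)Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z);
    }

    // Signed distance to a sphere of radius r centered at the origin
    static float Sphere(Vec3 p, float r)
    {
        return Length(p) - r;
    }

    // Signed distance to an axis-aligned box with half-extents b
    static float Box(Vec3 p, Vec3 b)
    {
        var d = new Vec3(Math.Abs(p.X) - b.X, Math.Abs(p.Y) - b.Y, Math.Abs(p.Z) - b.Z);
        return Math.Min(Math.Max(d.X, Math.Max(d.Y, d.Z)), 0)
             + Length(new Vec3(Math.Max(d.X, 0), Math.Max(d.Y, 0), Math.Max(d.Z, 0)));
    }

    // Example scene: a box with a sphere carved out of it, a single "white material"
    static float Scene(Vec3 p)
    {
        return Math.Max(Box(p, new Vec3(1, 1, 1)), -Sphere(p, 1.2f));
    }
}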

I spent around two to three weeks building the scenes. I ended up with 4 scenes, each one quite cool on its own, with a consistent design among them. One of the scenes was using fonts to render a wall of text in raymarching.

Because I'm not sure I will be able to reuse those scenes, well, I'm going to post their screenshots here!

The 1st scene I developed during my D3D9/D3D10/D3D11 API experiments was a massive tentacle model coming out of a black hole. All the tentacles were moving around a weird cut sphere with a central "eye"... I was quite happy with this scene, which had a unique design. From the beginning, I wanted to add some post-processing to enhance the visuals and to make them a bit different from other raymarching scenes... So I went with a simple post-processing pass that applied some patterns to the pixels, added a radial blur to produce a kind of "ghost rays" coming out of the scene, made the corners darker, and added a small flickering that increases towards the corners. Well, this piece of code alone was already costing as much as a scene on its own, but that was the price of a genuine ambiance, so...

The colors and theming were almost settled from the beginning... I'm a huge fan of warm colors!

The 2nd scene was using font rendering coupled with the raymarcher... a kind of flying flag, with the FRequency logo appearing from left to right with a light on it... (I will probably release those effects on pouet, just for the record...). That was also a fresh use of raymarching... I hadn't seen anything like this in recent 4k productions, so I was expecting to put this text in the 4k, as it's not so common... The code to use the D3D font was not too fat... so I was still confident I would be able to use those 2 scenes.

After that, I was looking for some nasty objects... so for the 3rd scene, I played randomly with some weird functions and ended up with a kind of "raptor" creature... I also wanted to use a weird generated texture I had found a few months earlier, which was perfect for it.

Finally, I wanted to use the texture to make a kind of lava sea with a snake moving on it... that was the last scene I coded (plus, of course, 2 other scenes that are too ugly to show here! :) ).


At that time, in February, we also started to work on the music, and as I explained in earlier posts, we used the 4klang synth for the intro. But with all those scenes and a music prototype, the "crinklered" compressed exe was more around 5KB... even if the shader code was already size-optimized, using some kind of preprocessor templating (like in Rudebox or Receptor). The intro was of course lacking a clear direction, there were no transitions between the scenes... and most importantly, it was not possible to fit all those scenes in 4k while expecting the music to grow a little more in the final exe...

    The story of the Worm-Lava texture


Last year, around November, while I was playing with several Perlin-like noises, I found an interesting variation using Perlin noise and the marble-cosine effect that was able to represent a kind of worms: quite freakishly ugly in some way, but a unique texture effect!


This texture was primarily developed in C#, but the code was quite straightforward to port to a texture shader... Yep, it's probably an old trick with D3D9 to use the D3DXFillTextureTX function to fill a texture directly from a shader with a single line of code... Why use this? Because it was the only way to get a noise() function accessible from a shader without having to implement it... As weird as it may sound, the HLSL Perlin noise() function is not accessible outside of a texture shader. A huge drawback of this method is also that the shader is not a real GPU shader, but is instead computed on the CPU... which explains why the Ergon intro takes so long to generate the texture at startup (at a 1280x720 texture resolution, for example).

So what does the texture shader that generates this texture look like?
    // -------------------------------------------------------------------------
    // worm noise function
    // -------------------------------------------------------------------------
    #define ty(x,y) (pow(.5+sin((x)*y*6.2831)/2,2)-.5)
    #define t2(x,y) ty(y+2*ty(x+2*noise(float3(cos((x)/3)+x,y,(x)*.1)),.3),.7)
    #define tx(x,y,a,d) ((t2(x, y) * (a - x) * (d - y) + t2(x - a, y) * x * (d - y) + t2(x, y - d) * (a - x) * y + t2(x - a, y - d) * x * y) / (a * d))

    float4 x( float2 x : position, float2 y : psize) : color {
    float a=0,d=64;
    // Modified FBM functions to generate a blob texture
    for(;d>=2;d/=2)
    a += abs(tx(x.x*d,x.y*d,d,d)/d);
    return a*2;
    }

The tx macro basically applies tiling to the noise.
The core t2 and ty macros are the ones that generate this "worm-noise". It's in fact a tricky combination of the usual cosine Perlin noise. Instead of having something like cos(x + noise(x,y)), I have something like special_sin(y + special_sin(x + noise(cos(x/3)+x, y), power1), power2), with the special_sin function being something like ((1 + sin(x*power*2*PI))/2) ^ 2

Also, don't be afraid... this formula didn't come out of my head like this... it was clearly the result of lots of permutations of the original function, with lots of run/stop/change-parameters steps! :D

    Music and synchronization


It took some time to build the music theme and to be satisfied with it... At the beginning, I let ulrick make a first version of the music... But because I had a clear view of the design and direction, I was expecting a very specific progression in the tune and even in the chords used... That was really annoying for ulrick (excuse me, my friend!), as I was very intrusive in the composition process... At some point, I ended up making a 2-pattern example of what I wanted in terms of chords and musical ambiance... and ulrick was kind enough to take this sample pattern and clever enough to add some of his own intro-music feeling to it. He can talk about this better than me, so I'll ask him if he can insert a small explanation here!

ulrick here: « Working with @lx on this prod was a very enjoyable job. I started a piece of music which @lx did not like very much; it did not reflect the feelings that @lx wanted to convey through Ergon. He thus composed a few patterns using a very emotional musical scale. I got into the music very easily and added my own stuff. As an anecdote, I added a second scale to the music to allow for a clearer transition between the first and second parts of Ergon. After doing so, we realized that our music actually used the chromatic scale on E »

The synchronization was the last part of the work on the demo. I first used the default synchronization mechanism from 4klang... but it lacked some features: for example, if the demo was running slowly, I needed to know exactly where I was... Using plain 4klang sync, I was missing some events on slow hardware, which could even prevent the intro from switching between scenes, because the switching event was missed by the rendering loop!

So I did my own small synchronization, based on the regular snare events and a reduced view of the sample patterns for those particular events. This is the only part of the intro that was developed in x86 assembler, in order to keep it as small as possible.

    The whole code was something like this :
    static float const_time = 0.001f;
    static int SAMPLES_PER_DRUMS = SAMPLES_PER_TICK*16;
    static int SAMPLES_PER_DROP_DRUMS = SAMPLES_PER_TICK*4;
    static int SMOOTHSTEP_FACTOR = 3;

    static unsigned char drum_flags[96] = {
    // pattern n° time z.z sequence
    1,1,1,1, // pattern 0 0 0 0
    1,1,1,1, // pattern 1 7,384615385 4 1
    0,0,0,0, // pattern 2 14,76923077 8 2
    0,0,0,0, // pattern 3 22,15384615 12 3
    0,0,0,0, // pattern 4 29,53846154 16 4
    0,0,0,0, // pattern 5 36,92307692 20 5
    0,0,0,0, // pattern 6 44,30769231 24 6
    0,0,0,0, // pattern 7 51,69230769 28 7
    0,0,0,1, // pattern 8 59,07692308 32 8
    0,0,0,1, // pattern 8 66,46153846 36 9
    1,1,1,1, // pattern 9 73,84615385 40 10
    1,1,1,1, // pattern 9 81,23076923 44 11
    1,1,1,1, // pattern 10 88,61538462 48 12
    0,0,0,0, // pattern 11 96 52 13
    0,0,0,0, // pattern 2 103,3846154 56 14
    0,0,0,0, // pattern 3 110,7692308 60 15
    0,0,0,0, // pattern 4 118,1538462 64 16
    0,0,0,0, // pattern 5 125,5384615 68 17
    0,0,0,0, // pattern 6 132,9230769 72 18
    0,0,0,0, // pattern 7 140,3076923 76 19
    0,0,0,1, // pattern 8 147,6923077 80 20
    1,1,1,1, // pattern 12 155,0769231 84 21
    1,1,1,1, // pattern 13 162,4615385 88 22
    };

    // Calculate time, synchro step and boom shader variables
    __asm {
    fild dword ptr [time] // st0 : time
    fmul dword ptr [const_time] // st0 = st0 * 0.001f
    fstp dword ptr [shaderVar.x] // shaderVar.x = time * 0.001f
    mov eax, dword ptr [MMTime.u.sample]
    cdq
    sub eax, SAMPLES_PER_TICK*8
    jae not_first_drum
    xor eax,eax
    not_first_drum:
    idiv dword ptr [SAMPLES_PER_DRUMS] // eax = drumStep , edx = remainder step
    mov dword ptr [drum_step], eax
    fild dword ptr [drum_step]
    fstp dword ptr [shaderVar.z] // shaderVar.z = drumStep

    not_end: cmp byte ptr [eax + drum_flags],0
    jne no_boom

    mov eax, SAMPLES_PER_TICK*4
    sub eax,edx
    jae boom_ok
    xor eax,eax
    boom_ok:
    mov dword ptr [shaderVar.y],eax
    fild dword ptr [shaderVar.y]
    fidiv dword ptr [SAMPLES_PER_DROP_DRUMS] // st0 : boom
    fild dword ptr [SMOOTHSTEP_FACTOR] // st0: 3, st1-4 = boom
    fsub st(0),st(1) // st0 : 3 - boom , st1-3 = boom
    fsub st(0),st(1) // st0 : 3 - boom*2, st1-2 = boom
    fmul st(0),st(1) // st0 : boom * (3-boom*2), st1 = boom
    fmulp st(1),st(0)
    fstp dword ptr [shaderVar.y]
    no_boom:
    };

That was smaller than what I was able to do with pure 4klang sync... with the drawback that the sync was probably too simplistic... but I couldn't afford more code for the sync... so...
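For readability, here is a hedged C# transcription of what the assembly block above computes (my own rewrite for illustration; the SAMPLES_PER_TICK value is an assumption, and drum_flags is the table from the listing, shortened here):

using System;

static class SyncSketch
{
    const int SAMPLES_PER_TICK = 4096;                        // assumed value, depends on the 4klang setup
    const int SAMPLES_PER_DRUMS = SAMPLES_PER_TICK * 16;
    const int SAMPLES_PER_DROP_DRUMS = SAMPLES_PER_TICK * 4;

    // Same table as in the asm listing, shortened here
    static readonly byte[] drum_flags = { 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0 /* ... */ };

    // Computes the x (time), y (boom) and z (drum step) shader variables
    public static void Compute(int timeMs, int samplePos, ref float x, ref float y, ref float z)
    {
        x = timeMs * 0.001f;

        int sample = Math.Max(0, samplePos - SAMPLES_PER_TICK * 8);
        int drumStep = sample / SAMPLES_PER_DRUMS;
        int remainder = sample % SAMPLES_PER_DRUMS;
        z = drumStep;

        if (drum_flags[drumStep] == 0)
        {
            // Ramp decaying over the first quarter of the drum period, shaped by a smoothstep (3b^2 - 2b^3)
            float b = Math.Max(0, SAMPLES_PER_DROP_DRUMS - remainder) / (float)SAMPLES_PER_DROP_DRUMS;
            y = b * b * (3 - 2 * b);
        }
    }
}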

    Final mixing


Once the music was almost finished, I spent a couple of days working on the transitions, sync and camera movements. Because it was not possible to fit the 4 scenes, I had to merge scene 3 (the raptor) and scene 4 (the snake and the lava sea), and found a way to make a transition through a "central brain". ulrick wanted a different music style for the transition, and I was not confident about it... until I put the transition in action, letting the brain collapse while the ground under it was being dug away all around... and the music fit very well! Cool!


I also used a single big shader for the whole intro, with some if (time < x) then scene_1 else scene_2... etc. I didn't expect to do this, because this kind of branching hurts performance in the pixel shader... But I was really running out of space here, and the only solution was in fact to use a single shader with some repetitive code. Here is an excerpt from the shader code: you can see how scene and camera management has been done, as well as the lights. This part compressed quite well due to its repetitive pattern.
    // -------------------------------------------------------------------------

    // t3

    // Helper function to rotate a vector. Usage :

    // t3(mypoint.xz, .7); <= rotate mypoint around Y axis with .7 radians
    // -------------------------------------------------------------------------
    float2 t3(inout float2 x,float y){
    return x=x*cos(y)+sin(y)*float2(-x.y,x.x);
    }

    // -------------------------------------------------------------------------
    // v : main raymarching function
    // -------------------------------------------------------------------------
    float4 v(float2 x:texcoord):color{
    float a=1,b=0,c=0,d=0,e=0,f=0,i;
    float3 n,o,p,q,r,s,t=0,y;
    int w;
    r=normalize(float3(x.x*1.25,-x.y,1)); // ray
    x = float2(.001,0); // epsilon factor

    // Scene management
    if (z.z<39) {
    w = (z.z<10)?0:(z.z>26)?3+int(fmod(z.z,5)):int(fmod(z.z,3));

    //w=4;
    if (w==0) { p=float3(12,5+30*smoothstep(16,0,z.x),0);t3(r.yz,1.1*smoothstep(16,0,z.x));t3(r.xz,1.54); }
    if (w==1) { p=float3(-13,4,-8);t3(r.yz,.2);t3(r.xz,-.5);t3(r.xy,sin(z.x/3)/3); }
    if (w==2) { p=float3(0,8.5,-5);t3(r.yz,.2);t3(r.xy,sin(z.x/3)/5); }
    if (w==3) {
    p=float3(13+sin(z.x/5)*3,10+3*sin(z.x/2),0);
    t3(r.yz, sin(z.x/5)*.6);
    t3(r.xz, 1.54+z.x/5);
    t3(r.xy, cos(z.x/10)/3);
    t3(p.xz,z.x/5);
    }

    if (w == 4) {
    p=float3(30+sin(z.x/5)*3,8,0);
    t3(r.yz, sin(z.x/5)/5);
    t3(r.xz, 1.54+z.x/3);
    t3(r.xy, sin(z.x/10)/3);
    t3(p.xz,z.x/3);
    }

    if (w > 4) {
    p=float3(4.5,25+10*sin(z.x/3),0);
    t3(r.yz, 1.54*sin(z.x/5));
    t3(r.xz, .7+z.x/2);
    t3(r.xy, sin(z.x/10)/3);
    t3(p.xz,z.x/2);
    }
    } else if (z.z<52) {
    p=float3(20,20,0);
    t3(r.yz, .9);
    t3(r.xz, 1.54+z.x/4);
    t3(p.xz,z.x/4);
    } else if (z.z<81) {
    w = int(fmod(z.z,3));
    if (w==0 ) {
    p=float3(40+sin(z.x/5)*3,8,0);
    t3(r.yz, sin(z.x/5)/5);
    t3(r.xz, 1.54+z.x/3);
    t3(r.xy, sin(z.x/10)/3);
    t3(p.xz,z.x/3);
    }
    if (w==1 ) {
    p=float3(-10,30,0);
    t3(r.yz, 1.1);
    t3(r.xz, z.x/4);
    }
    if (w==2 ) {
    p=float3(25+sin(z.x/5)*3,10+3*sin(z.x/2),0);
    t3(r.yz, sin(z.x/5)/2);
    t3(r.xz, 1.54+z.x/5);
    t3(r.xy, cos(z.x/10)/3);
    t3(p.xz,z.x/5);
    }
    } else {
    p=float3(0,4,8);
    t3(r.yz,sin(z.x/5)/5);
    t3(r.xy,cos(z.x/4)/2);
    t3(r.xz,-1.54+smoothstep(0,4,z.x-155)*(z.x-155)/3);
    }


    // Boom effect on camera
    p.x+=z.y*sin(111*z.x)/4;

    // Lights
    static float4 l[6] = {{.7,.2,0,2},{.7,0,0,3},{.02,.05,.2,7},
    {(4+10*step(24,z.z))*cos(z.x/5),-5,(4+10*step(24,z.z))*sin(z.x/5),0},
    {-30+5*sin(z.x/2),8,6+10*sin(z.x/2),0},
    {25,25,10,0}
    };

    Compression statistics


Final compression results are summarized below:


So to summarize, the total exe size is 4070 bytes, composed of:
• Synth code + music data: around 35% of the total exe size = 1461 bytes
• Shader code: 36% = 1467 bytes
• Main code + non-shader data: 14% = 549 bytes
• PE header + crinkler decoder + crinkler import: 15% = 593 bytes


The intro was finished around the 13th of March 2010, well ahead of BP2010. So that was damn cool... I spent the rest of my time until BP2010 trying to develop a procedural 4k gfx, using D3D11 compute shaders, raymarching and a global illumination algorithm... but the results (the algorithm was finished during the party) disappointed me... And when I saw the fantastic Burj Babil by Psycho, he was right to use a plain raymarcher without any complicated true light management... a good "basic" raymarching algo, with some fine-tuned tone mapping, was much more relevant here!

    Anyway, my GI experiment on the compute shader will probably deserve an article here.



I really enjoyed making this demo and seeing that Ergon was able to make it into the top 3... after seeing BP2009, I was not expecting the intro to reach the top 3 at all!... although I know that the competition this year was much weaker than at the previous BP!

Anyway, it was nice to work with my friend ulrick... and to contribute to the demoscene with this prod. I hope I will be able to keep working on demos like this... I still have lots of things to learn, and that's cool!
              Democoding, tools coding and coding scattering        
Not many posts here for a while... So I'm just going to recap some of the coding work I have done so far... you will notice that it goes in lots of directions, depending on opportunities and ideas, sometimes not related to democoding at all... not really ideal when you want to release something! ;)

So, here are some of the directions I have been working in so far...


    C# and XNA

I tried to work more with C# and XNA... looking for an opportunity to code a demo in C#... I even started a post about it a few months ago, but left it in a draft state. XNA is really great, but I had some bad experiences with it... I was able to use it without requiring a full install, but while playing with model loading, I hit a weird bug known as the black model bug. Anyway, I might come back to C# for DirectX stuff... SlimDX, for example, is really helpful for that.

    A 4k/64k softsynth

I have coded a synth dedicated to 4k/64k coding. Although, right now, I only have the VST and its GUI fully working under Renoise... but not yet the asm 4k player! ;)



The main idea was to build an FM8/DX7-like synth, with exactly the same output quality (excluding some fancy stuff like the arpeggiator...). The synth was developed in C# using VST.NET, but it should be considered more of a prototype in this language... because the asm code generated by the JIT is not really good when it comes to floating-point calculation... Anyway, it was really nice to develop on this platform, being able to prototype the whole thing in a few days (and, of course, many more days to add rich GUI interaction!).

I still have to add a sound library file manager and the importer for DX7 patches... Yes, you read that right... my main concern is to provide as many ready-to-use patches as possible for ulrick (our musician at FRequency)... Decoding a DX7 patch is well documented around the net... but the more complex part was to make it decode the way FM8 does... and that was tricky... Right now, all the transform functions are in an Excel spreadsheet, but I still have to code them in C#!

You may wonder why I developed the synth in C# if the main target is to code the player in x86 asm? Well, for practical reasons: I needed to quickly experiment with the versatility of the sounds of this synth, and I'm much more familiar with .NET WinForms for quickly building a complex GUI. Still, I designed the whole synth with the 4k limitation in mind... especially regarding data representation and the complexity of the player routine.

For example, in the 4k mode of this synth, waveforms are strictly restricted to only one: sin! No noise, no sawtooth, no square... what? A synth without those waveforms?... But yeah... when I looked back at the DX7 synth implementation, I realized that it only uses a pure "sin"... yet with the complex FM routing mechanism plus the feedback on the operators, the DX7 is able to produce a large variety of sounds, ranging from strings, bells and bass... to drumkits, and so on...
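To illustrate why a single sine can be enough, here is a hedged sketch of the basic FM idea (my own illustration, not the synth's code): one sine operator with feedback modulating the phase of a sine carrier; the sample rate, ratio and index values are arbitrary:

using System;

class FmVoice
{
    const double SampleRate = 44100.0;
    double carrierPhase, modPhase, modPrev;   // running phases and the last modulator output

    // freq: note frequency, ratio: modulator frequency ratio,
    // index: modulation depth, feedback: modulator self-feedback amount
    public double NextSample(double freq, double ratio, double index, double feedback)
    {
        // Modulator: a sine fed back with its own previous output
        double mod = Math.Sin(2 * Math.PI * modPhase + feedback * modPrev);
        modPrev = mod;
        modPhase += freq * ratio / SampleRate;

        // Carrier: a sine whose phase is offset by the modulator output
        double sample = Math.Sin(2 * Math.PI * carrierPhase + index * mod);
        carrierPhase += freq / SampleRate;
        return sample;
    }
}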

I also did a couple of effects, mainly a versatile variable delay line to implement chorus/flanger/reverb.

So basically, I should end up with a synth with two modes:
- 4k mode: only 6 oscillators per instrument, sin oscillators only, simple ADSR envelopes, full FM8-like routing for operators, fixed key scaling/velocity scaling/envelope scaling. Per-instrument/global effects with at least a delay line + optional filters. And last but not least, polyphony: that's probably the thing I miss the most in today's 4k synths...
- 64k mode: up to 8 oscillators per instrument, all the FM8 oscillator + filter + wave-shaping + ring-modulation operators, 64-step FM8-like envelopes, dynamic key scaling/velocity scaling/envelope scaling. More effects, with better quality, 2 parallel + serial effect lines per instrument. Additional effect channels to route several instruments to the same effects chain. A modulation matrix.

The 4k mode is in fact a restriction of the 64k mode, mostly at the GUI level. I'm currently targeting only the 4k mode, while designing the synth so that it is ready to support the 64k mode features.

What's next? Well, finish the C# part (file manager and DX7 import) and start the x86 asm player... I just hope to stay under 700 compressed bytes for the 4k player (while the 64k mode will be written in C++, with an easier limitation of around 5KB of compressed code)... but hey, until it's coded... it's pure speculation!... And as you can see, the journey is far from finished! ;)

    Context modeling Compression update

During the summer, I came back to the compression experiment I did last year... The current status is rather stalled... The compressor is quite good, sometimes better than Crinkler for 4k... but the prototype of the decompressor (not working, not tested...) is taking more than 100 bytes more than Crinkler's... So in the end, I know that I would end up 30 to 100 bytes worse than Crinkler... and this is not motivating me to finish the decompressor and get it really running.

The basic idea was to take the standard context modeling approach from Matt Mahoney (also known as PAQ compression; Matt did a fantastic job with his research and open source compressors, by the way), using a dynamic neural network with an order of 8 (8 bytes of context history), with the same mask selection approach as Crinkler plus some new context filtering at the bit level... In the end, the decompressor uses the FPU to decode the whole thing... as it needs ln2() and pow2() functions... So during the summer, I thought about using another logistic activation function to get rid of the FPU: the standard base-2 sigmoid used in the neural network is 1/(1+2^-x), so I found something similar with y = (x / (1 + |x|) + 1) / 2 from David Elliott (some references here). I didn't have any computer at that time to test it, so I spent a few days doing some math optimization on it, including working out the logit function (the inverse of this logistic function).
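As a small illustration (again my own, not the decompressor code), here is the base-2 sigmoid next to Elliott's FPU-friendly approximation mentioned above:

using System;

class LogisticCompare
{
    // Standard base-2 sigmoid: 1 / (1 + 2^-x)
    static double Sigmoid2(double x)
    {
        return 1.0 / (1.0 + Math.Pow(2.0, -x));
    }

    // Elliott's function rescaled to (0,1): (x / (1 + |x|) + 1) / 2
    static double Elliott(double x)
    {
        return (x / (1.0 + Math.Abs(x)) + 1.0) / 2.0;
    }

    static void Main()
    {
        foreach (double x in new[] { -4.0, -1.0, 0.0, 1.0, 4.0 })
            Console.WriteLine("x={0,5}: sigmoid2={1:F4}  elliott={2:F4}", x, Sigmoid2(x), Elliott(x));
    }
}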

    I came back home very excited to test this method... but I was really disappointed... the function degraded the compression ratio by around 20%; in the end, completely useless!

    If by next year I'm not able to release anything from this... I will make all this work open source, at least for educational purposes... someone will certainly be cleverer than me on this and tweak the code size down!

    A SlimDX-like DirectX wrapper in C++

    Recall that for the ergon intro, I had been working with a very thin layer around DirectX to wrap enums/interfaces/structures/functions. I did that around D3D10, a bit of D3D11, and a bit of D3D9 (which was the one I used for ergon). The goal was to achieve a C#-like DirectX interface in C++. Since the code had been written almost entirely manually, I was wondering if I could not generate it directly from the DirectX header files...

    So for the last few days, I have been working a bit on this... I'm using boost::wave as the preprocessor library... and I have to admit that the C++ guys from boost lost their minds with templates... It's amazing how they made something simple so complex with templates... I wanted to use this inside a C++/CLI managed .NET extension to ease my development in C#, but I ended up with a template error at the link stage... an incredible error with a line full of concatenated templates, even freezing Visual Studio when I wanted to see the errors in the error list!

    Templates are really nice when they are not used too intensively... but when everything in your code is templatized, it becomes very hard to use a library fluently, and it's sometimes impossible to understand a template error when that error is more than 100 lines full of cascading template types!

    Anyway, I was able to plug this boost::wave into a native DLL and call it from a C# library... the next step is to see how much I can get from the DirectX header files to extract a form of IDL (Interface Definition Language). If I cannot get something relevant in the next week, I might postpone this task until I have nothing more important to do! The good thing is that, for the D3D11 headers for example, you can see that those files were auto-generated from a mysterious... d3d11.idl file... used internally at Microsoft (although it would have been easier to get that file directly!)... so it means that the whole header is quite easy to parse, as the syntax is quite systematic.
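
    To illustrate what "quite systematic" means, here is a rough C# sketch (hypothetical file name and regex, not my actual tool) that pulls the COM interface declarations out of a preprocessed d3d11.h:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        static class HeaderScanDemo
        {
            static void Main()
            {
                // Hypothetical path; the header would first be run through the
                // preprocessor (boost::wave in my case) to resolve macros.
                string text = File.ReadAllText(@"d3d11.preprocessed.h");

                // D3D headers declare interfaces via MIDL_INTERFACE("GUID")
                // blocks, which makes them quite regular to match.
                var interfaceRegex = new Regex(
                    "MIDL_INTERFACE\\(\"(?<guid>[^\"]+)\"\\)\\s*" +
                    "(?<name>\\w+)\\s*:\\s*public\\s*(?<base>\\w+)");

                foreach (Match m in interfaceRegex.Matches(text))
                {
                    Console.WriteLine("{0} : {1}  (guid {2})",
                        m.Groups["name"].Value,
                        m.Groups["base"].Value,
                        m.Groups["guid"].Value);
                }
            }
        }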

    OK, this is probably not linked to intros... or probably only to 64k ones... and I'm not sure I will be able to finish it (much like rmasm)... And this kind of work keeps me away from working directly with DirectX, experimenting with rendering techniques and so on... Well, I also have to admit that for the past few years I have been more attracted to building tools that enhance coding productivity (not necessarily only mine)... I don't like doing too many things manually... so every time there is an opportunity to automate a process, I can't refrain from making it automatic! :D


    AsmHighlighter and NShader next update

    Following my unhealthy appetite for tools, I need to make some updates to AsmHighlighter and NShader: add some missing keywords, patch a bug, support the new VS2010 version... whatever... When you release this kind of open source project, well, you have to maintain it, even if you don't use it much yourself... because other people are using it and are asking for improvements... that's the other side of the picture...

    So because I have to maintain those two projects, and they in fact share more than 95% of the same code, I have decided to merge them into a single one... which will be available soon on CodePlex as well. That will be easier to maintain, leaving only one project to update.


    The main features people are asking for are the ability to add keywords easily and to map file extensions to the syntax highlighting system... So I'm going to generalize the design of the two projects to make them more configurable... hopefully this will cover the main feature requests...

    An application for Windows Phone 7... meh?

    Yep... I have to admit that I'm really excited by the upcoming Windows Phone 7 Metro interface... I'm quite fed up with my iPhone's look and feel... and because the development environment is so easy with C#, I have decided to code an application for it. I'm starting with a chromatic tuner for guitar/piano/violin... etc., and it's working quite well, even if I was only able to test it under the emulator. While developing this application, I have learned some cool things about pitch detection algorithms and so on...
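
    As an illustration of the kind of thing I've been reading about, a naive autocorrelation-based pitch estimator (a sketch with assumed names and ranges, not the tuner's actual code) can be written like this in C#:

        static class PitchDemo
        {
            // Minimal autocorrelation-based pitch estimate: find the lag whose
            // autocorrelation is highest and convert it to a frequency.
            public static double EstimatePitch(float[] samples, int sampleRate,
                                               double minHz = 60, double maxHz = 1200)
            {
                int minLag = (int)(sampleRate / maxHz);
                int maxLag = (int)(sampleRate / minHz);
                int bestLag = minLag;
                double bestCorr = double.MinValue;

                for (int lag = minLag; lag <= maxLag; lag++)
                {
                    double corr = 0;
                    for (int i = 0; i + lag < samples.Length; i++)
                        corr += samples[i] * samples[i + lag];
                    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
                }
                return (double)sampleRate / bestLag;
            }
        }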

    I hope to finish the application around September, and to be able to test it on real hardware when WP7 is officially launched... and before putting this application on the Windows Marketplace.

    If this works well, I might look into developing other applications, like porting the softsynth I wrote in C# to this platform... We will see... and definitely, this last part is completely unrelated to democoding!


    What's next?

    Well, I have to prioritize my work for the next few months:
    1. Merge AsmHighlighter and NShader into a single project.
    2. Play for a week with the DirectX headers to see if I can extract some IDL-like information.
    3. Finish the 4k mode of the softsynth... and develop the x86 asm player
    4. Finish the WP7 application
    I also still have an article to write about the making of ergon; there's not much to say about it, but it could be interesting to write those things down...

    I also need to work on some new DirectX effects... I have played a bit with hardware instancing and compute shaders (including a raymarcher with global illumination for a 4k procedural compo that didn't make it to BP2010, because the results were not impressive enough and too slow to compute...)... I would really like to explore SSAO techniques with plain polygons... but I haven't taken the time for that... so yep, practicing more graphics coding should be at the top of my list... instead of all those time-consuming and (sometimes useful) tools!
              NShader 1.1, hlsl, glsl, cg syntax coloring for Visual Studio 2008 & 2010        
    I have recently released NShader 1.1 which adds support for Visual Studio 2010 as well as bugfixes for hlsl/glsl syntax highlighting.

    While this plugin is quite cool for adding basic syntax highlighting for shader languages, it lacks IntelliSense/completion/error markers to improve the editor experience. I didn't have time to add such functionality in this release as... I don't really have much time dedicated to this project... and well, I have so much to learn from actually practicing shader languages a lot more that I'm fine with basic syntax highlighting! ;) Is it a huge task to add IntelliSense? It depends, but concretely, I would need to implement a full grammar/lexer/parser for each shading language in order to provide reliable IntelliSense. Of course, a very basic IntelliSense would be feasible without this, but I would rather not ship an annoying/unreliable IntelliSense popup.

    I did some research on existing lexers for shading languages, and surprisingly, this is not something you can find easily. For HLSL, for example, there is afaik no BNF grammar published by Microsoft, so if you want to do it yourself, you need to go through the whole HLSL reference documentation and compile a BNF yourself... and that's something I can't afford in my spare time. One could argue that there is some starter code available on the net (O3D from Google has an ANTLR parser/lexer, and there's a relatively simpler one from Christian Schladetsch); agreed, but well... it still takes a bit more time to patch them, add support for SM5.0, handle preprocessor directives correctly... and so on... After that, I would need to integrate it through the language service API, which is not the worst part. Anyway, if someone is motivated to help me on this, we could come up with something. We will also see if Intelishade is able to resurrect in an open source way... a joint venture would be interesting.
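
    Just to give a sense of the scale: even the most naive toy tokenizer (illustrative only, with a made-up five-keyword set, nothing like a real SM5.0 grammar) already looks like this, before any parsing, intrinsics table or preprocessor handling:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class TinyHlslLexerDemo
        {
            // Deliberately tiny subset of HLSL keywords, for illustration only;
            // a real lexer would need the full SM5.0 keyword/intrinsic set.
            static readonly HashSet<string> Keywords =
                new HashSet<string> { "float4", "struct", "return", "cbuffer", "register" };

            static void Main()
            {
                string source = "float4 main() : SV_Target { return color; }";
                var token = new Regex(@"//[^\n]*|[A-Za-z_]\w*|\d+\.?\d*|\S");

                foreach (Match m in token.Matches(source))
                {
                    string t = m.Value;
                    string kind = Keywords.Contains(t) ? "keyword"
                                : char.IsLetter(t[0]) || t[0] == '_' ? "identifier"
                                : char.IsDigit(t[0]) ? "number"
                                : "punctuation";
                    Console.WriteLine("{0,-12}{1}", kind, t);
                }
            }
        }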

    Also, what's my feedback on migrating a VS2008 language service to VS2010? Well, it was pretty straightforward! I followed the SDK instructions about "Migrating a Legacy Language Service", but it was not fully working as expected. In fact, the only remaining problem was that the VSIX VS2010 installer didn't automatically register the NShader language service. I was forced to manually add the pkgdef file (containing the registry updates for the language service) to the vsix archive. While I was working on the migration to VS2010, I had a look at the new extensibility framework and was surprised to see that it is by far easier to work with in VS2010. Although I didn't take the time to migrate NShader to this new framework, it seems to be pretty easy... another nice thing is that they provide a compatibility layer for legacy language services, so I didn't bother with the new API. But if I had to write a new plugin for VS, I would definitely use the new API, although it would only work with VS2010+ versions...

    One small recurring disappointment: Visual Studio still doesn't allow plugins for the Express editions. From a "commercial point of view" I understand this restriction, although for the thousands (millions maybe?) of people using an Express edition, this is a huge missing piece of functionality. I'm sure that allowing community plugins into the Express editions would in fact improve Visual Studio adoption a lot more.

    My next post should be about the making of Ergon at BP2010. I have a couple of things to share about it, but I'm feeling quite lazy about writing that post right now... but it's on the way! ;)
              Coding a 4k intro for Breakpoint 2010        
    I'm going to be quite busy here until the Breakpoint 2010 demoparty, as I want to release a small 4k intro for this great event! A few weeks ago I hadn't planned anything and was slowly working on d3d10/d3d11, on some effects for a 16k/64k intro to release later this year... but BP2010 is supposed to be the last one and I don't want to miss the experience; it's my first time going there... and probably my last, so I have decided to go to this party and try to contribute to it, yep!

    I started to work with d3d10, as I wanted to add some nice Direct2D text layout over 2-3 raymarching scenes... but I found that Direct2D is a bit too costly for a 4k, especially if you want to avoid the basic white logo... so I had to switch back to plain d3d9 and forget about the cool text. I will use this Direct2D technique for a bigger intro. Due to my lack of investment in all the d3d APIs until now, I had to learn the API from the ground up... and it took me a while... To make the d3d10/d3d11 coding experience easier, I have developed a lightweight C++ wrapper around the d3d10/11 APIs, almost exactly in the same way (naming conventions, enums) SlimDX has done the job, and I'm really happy with it. The d3d10/d3d11 API is very clean, but due to some verbose API constraints (ugly enums sometimes mixed with some #defines, an HRESULT to check on every function return, a famous Windows programming philosophy), it's really worth wrapping this API in something that transparently hides all those things and renames and rearranges enums/methods/interfaces. Currently it's working great with this SlimDX-like wrapper: much easier to program, much easier to read, and it generates almost exactly the same code as straight d3d10/d3d11 code, with the ability to remove the HRESULT checks and so on... I will probably release this wrapper (a single .h with a bunch of inline methods) on CodePlex later this year, as it should help people like me who prefer a syntax much closer to the C# coding experience than to C/C++.

    During my small d3d11 incursion, I also discovered that the DirectX Effect framework (fx syntax files, with techniques and passes) is no longer available as part of the D3DX runtime! Yep, you need to go to the Utility directory in the DirectX SDK to find out that you can compile this framework yourself... It means that for intros, you can forget about the DirectX Effect framework and program a much lighter effect framework yourself. One good thing about this change is that I had to better understand the d3d10 philosophy regarding constant buffer access and so on... In fact, it's much easier to work directly with constant buffers, and surprisingly, it produces smaller code. As soon as I'm done with the 4k intro for BP2010, I will publish a small post about this, along with the SlimDX-like wrapper.

    The last thing is that I didn't have time to finish my softsynth, because I was targeting more of a 16k/64k, with a more complex synth... so we should go with gopher's great 4klang synth... but I have two problems with it: 1) ulrick (my old friend, FRequency's main musician) is unable to use it under Renoise. It burns his notebook's CPU and we don't know why, as it's a pretty standard Intel Core 2 Duo processor... we checked lots of 4klang/Renoise/system parameters without any success. 2) 4klang is great, but the total code is often close to 900 bytes... I know that some of the top 4k softsynths are around 500 to 700 bytes, so I'm not completely sure that I will use 4klang. The other idea is to take part of the work I have already done for the bigger synth (which is developed in x86 assembler) and try to plumb it into a fixed pipeline (rather than a stack-based one like 4klang)... I'm not sure, but I suspect that I could save a substantial number of bytes... but still, I'm not sure, and I need to check it... and it's going to be really hard to make it in time for BP... so we'll see...

    Currently, I have only coded one scene for the 4k intro, and I'm quite happy with it... but that doesn't make a full intro! I need to add at least 3 scenes, work on the transitions, overall design, sync with the synth and so on... even for a 4k, that's a lot of work, especially when you consider that this is my first prod on my own (I mean, my first prod for the PC, after the 3 prods I released 20 years ago on the Amiga! ;) ), but it's possible to do it in less than 2 months, so I'll try to do my best!
              Securing Your eCommerce Store with Backups Using WooCommerce        

    When it comes to your eCommerce store, security is not just important; it's a necessity. This means making sure you are running with a valid SSL certificate, have a payment processor that you and your customers can rely on, …



              Zesty Dairy Free "Cheese" Ball        
    Zesty Dairy Free Cheese Ball
    I can't take credit for this one! My mom and sister came up with it quite a few years ago to take the place of the cream-cheese ball that Mom always made when we were kids, which everyone loved.

    I made this 5 or 6 years back at Christmas and now my husband asks for it every year. It has become a tradition!

    It's very good with veggies or crackers! I love that our family feels like it's a special treat when in reality it is super duper healthy, made mainly with raw sprouted almonds packed full of whole food nutrition and enzymes that will help you digest those maybe-not-so-whole-food treats! ;)

    I love to make this up just before Christmas and have it as part of all our fun dips with our finger foods. I make enough that we have some leftover for New Year's too.

    Healthy Cheese Ball without Dairy


    Zesty Dairy Free "Cheese" Ball
    2 cups almonds, soaked for 8 hours
    1 cup sunflower seeds, soaked for 4 hours and sprouted for 4 hours (or just add 3 cups of soaked almonds instead of 2)
    1/2 cup lemon juice
    1/2 cup water
    1/2 cup chopped green onions
    1/4 to 1/2 cup Tahini
    1/4 cup Shoyu
    3 slices sweet onion, cut in small chunks
    1/2 medium green bell pepper, seeded and chopped
    6 Tablespoons fresh chopped parsley
    2 to 3 medium cloves garlic, minced
    pinch of cayenne pepper
    pinch of ginger powder
    pinch of cumin
    pinch of paprika
    1 can crushed pineapple, drained very well
    1 cup finely chopped pecans


    Place the soaked and drained almonds and/or sunflower seeds, lemon juice, and water in a food processor and process until smooth. Scoop into a large bowl. Stir in the chopped green onions, tahini, shoyu, onions, parsley, bell pepper, garlic, cayenne, ginger, cumin, and paprika. Mix the crushed, drained pineapple into the mixture until well blended. Shape with dampened hands into a rounded ball and place carefully on a plate or dish (approx. eight inches in diameter). Press the chopped pecans onto the ball with your hands so that it is covered with nuts. Garnish with a parsley leaf and a dried cranberry if desired.

              Peach Cake with Cherry Vanilla Frosting        
    This squirrel thinks it's hot.
    The cats think it's hot.
    I think it's hot.

    So what should we do when it's hot?

    BAKE A CAKE!
    I KNOW! I come up with the greatest of ideas.

    Even though it's hot, you know what it means, right? It's peach season! And aside from eating them as-is, baking them into stuff is a great way to get them into you if you're not keen on eating fruit that drips down your arms. (What's wrong with you?)

    We used some fresh peaches we had sitting around, because if I can avoid using something from a can I do. Plus, I love fresh stuff.

    Boyfriend and I frequent the "reduced item" racks at the grocery store each time we go, and we find the best deals. We picked up a yellow cake mix box for $1.50 because it had a dent in it. Nothing bugs me about a dented box. Aside from that, we had a coupon for this!:
    My coupon gave me the frosting "mix-in" for free, saving $1. All in all, frosting my delicious fresh peach cake cost me $2.

    Andrew thinks the "A" is for Andrew. It's totally for Amanda.
    CAKE TIP: When icing your cake, put waxed paper under it so that you can just slide the waxed paper out and your plate won't look like a very inexperienced baker frosted your cake.

    So for reals guys, this came out delicious. I had a slice before dinner. I'm having a slice after dinner. I'll have one at 3am that no one will tell Andrew about. So I bring you:

    Fresh Peach Cake with Cherry Vanilla Frosting

    Ingredients:
    1 box yellow cake mix
    2 peaches
    3 eggs
    3/4 cup of milk (whole)
    1/2 cup veggie oil
    1 small packet of orange gelatin

    Method:
    Preheat oven to 325 degrees.

    In a medium-sized mixing bowl, combine the box of yellow cake mix, milk, veggie oil, and gelatin. Mix it together, then add the eggs one at a time. Puree the peaches in your food processor, or, in my case, use the chopper attachment of your immersion blender (my FAVE kitchen tool, second to the mandoline). Fold the peaches into the mix and whisk with your whisk attachment or an electric mixer for a few minutes until it looks like this:
    We used two cake pans (make sure you put cooking spray at the bottom if they're non-stick!) and baked them for 28-32 minutes (or until a toothpick comes out without any friends attached to it).

    Once baked, let them cool, then frost with the above-mentioned pre-made frosting.

    This cake was incredibly simple to make and it was amazing!

    What fruits do you like in cake? Do you have a preferred fruit that you only eat fresh?

    Thanks for stopping in!

              BITCOIN: What is it? How does it work?        
    Okay, so maybe you've been wondering for a while what this bitcoin that everyone has been talking about is, how it works and, more importantly, how you can get some. So calm down, peeps, because you are about to learn everything.


    Basically, BITCOIN is a peer-to-peer payment system and digital currency introduced as open source software in 2009 by the pseudonymous developer Satoshi Nakamoto.
    It is a cryptocurrency, so-called because it uses cryptography for security. Users send payments by broadcasting digitally signed messages to the network. Transactions are verified, timestamped, and recorded by specialized computers into a shared public transaction history database called the block chain. The operators of these computers, known as miners, are rewarded with transaction fees and newly minted bitcoins.
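
    To make the mining part a bit more concrete, here is a toy C# sketch (grossly simplified, not real Bitcoin code) of the proof-of-work idea: keep hashing the block data with different nonces until the hash starts with enough zeros:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ToyProofOfWorkDemo
        {
            static void Main()
            {
                // Toy "block": in reality this would contain the previous block's
                // hash, a set of transactions, a timestamp, etc.
                string blockData = "previous-hash|some-transactions|timestamp";
                string targetPrefix = "0000";  // difficulty: hash must start with this

                using (var sha256 = SHA256.Create())
                {
                    for (long nonce = 0; ; nonce++)
                    {
                        byte[] bytes = Encoding.UTF8.GetBytes(blockData + nonce);
                        string hash = BitConverter.ToString(sha256.ComputeHash(bytes))
                                                  .Replace("-", "").ToLowerInvariant();
                        if (hash.StartsWith(targetPrefix))
                        {
                            Console.WriteLine("Found nonce {0}: {1}", nonce, hash);
                            break;  // the miner who finds this first gets the reward
                        }
                    }
                }
            }
        }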

    Don't be surprised by this, but Bitcoin's exchange rate is way up there. It's absolutely true: 1 BTC (bitcoin) is currently worth 952.5 USD. Isn't that super crazy? If you don't believe it, feel free to check the exchange rates here yourself.

    That was the very basic knowledge about bitcoin. But how does this thing work, exactly? This is the question that causes confusion. Here's a quick explanation.

    Well, you can watch this video for an explanation of what a bitcoin is and how it works, or you can read the summary after the video.